Test Report: KVM_Linux 19575

7bfa33b863353ea74c2dd2110cc17945d6c51e0f:2024-09-04:36080

Failed tests (2/341)

| Order | Failed test                      | Duration (s) |
|-------|----------------------------------|--------------|
| 33    | TestAddons/parallel/Registry     | 74.51        |
| 111   | TestFunctional/parallel/License  | 0.13         |
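For local triage, a single failed test can be re-run from a minikube source checkout. The harness path and flags below are an assumption based on the CI log further down (which uses `out/minikube-linux-amd64` and `--driver=kvm2`), not taken verbatim from this report; the command is built as a string so it can be inspected before running:

```shell
# Hypothetical repro sketch: the integration-test package path and the
# -args flags are assumptions modeled on the logged CI invocation.
TEST_NAME="TestAddons/parallel/Registry"
CMD="go test ./test/integration -v -timeout 30m -run ${TEST_NAME} -args --driver=kvm2"

# Print the command rather than running it here; execute it manually in a
# checkout with a working kvm2 environment.
echo "$CMD"
```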
TestAddons/parallel/Registry (74.51s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:332: registry stabilized in 3.048863ms
addons_test.go:334: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-6fb4cdfc84-4pdpl" [bc53f46d-169c-40d2-af79-12a0385b32f2] Running
addons_test.go:334: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.004089095s
addons_test.go:337: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-cmjxq" [8268ad2e-f9a0-47f5-ba61-028fe8e93107] Running
addons_test.go:337: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 6.003984152s
addons_test.go:342: (dbg) Run:  kubectl --context addons-586464 delete po -l run=registry-test --now
addons_test.go:347: (dbg) Run:  kubectl --context addons-586464 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:347: (dbg) Non-zero exit: kubectl --context addons-586464 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": exit status 1 (1m0.090688932s)

-- stdout --
	pod "registry-test" deleted

-- /stdout --
** stderr ** 
	error: timed out waiting for the condition

** /stderr **
addons_test.go:349: failed to hit registry.kube-system.svc.cluster.local. args "kubectl --context addons-586464 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c \"wget --spider -S http://registry.kube-system.svc.cluster.local\"" failed: exit status 1
addons_test.go:353: expected curl response be "HTTP/1.1 200", but got *pod "registry-test" deleted
*
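The probe that timed out above targets the registry Service by its in-cluster DNS name, which follows the standard `<service>.<namespace>.svc.<cluster-domain>` pattern. A sketch of reproducing the check by hand, assuming the same `addons-586464` context (the kubectl command mirrors the one logged at addons_test.go:347 and is left commented out, since it needs a live cluster):

```shell
# Compose the service FQDN the test probes; all three components come from
# the log above (service "registry" in namespace "kube-system",
# DNSDomain "cluster.local").
SVC=registry
NS=kube-system
DOMAIN=cluster.local
URL="http://${SVC}.${NS}.svc.${DOMAIN}"
echo "$URL"   # http://registry.kube-system.svc.cluster.local

# Against a live cluster, the same probe the test runs would be:
#   kubectl --context addons-586464 run --rm registry-test --restart=Never \
#     --image=gcr.io/k8s-minikube/busybox -it -- wget --spider -S "$URL"
```

A timeout here, with the registry and registry-proxy pods reported healthy, points at in-cluster DNS resolution or Service routing rather than the registry pods themselves.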
addons_test.go:361: (dbg) Run:  out/minikube-linux-amd64 -p addons-586464 ip
2024/09/04 19:39:17 [DEBUG] GET http://192.168.39.55:5000
addons_test.go:390: (dbg) Run:  out/minikube-linux-amd64 -p addons-586464 addons disable registry --alsologtostderr -v=1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-586464 -n addons-586464
helpers_test.go:244: <<< TestAddons/parallel/Registry FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Registry]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-586464 logs -n 25
helpers_test.go:252: TestAddons/parallel/Registry logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| delete  | -p download-only-144619                                                                     | download-only-144619 | jenkins | v1.34.0 | 04 Sep 24 19:25 UTC | 04 Sep 24 19:25 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-787676 | jenkins | v1.34.0 | 04 Sep 24 19:25 UTC |                     |
	|         | binary-mirror-787676                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                      |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                      |         |         |                     |                     |
	|         | http://127.0.0.1:41895                                                                      |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-787676                                                                     | binary-mirror-787676 | jenkins | v1.34.0 | 04 Sep 24 19:25 UTC | 04 Sep 24 19:25 UTC |
	| addons  | enable dashboard -p                                                                         | addons-586464        | jenkins | v1.34.0 | 04 Sep 24 19:25 UTC |                     |
	|         | addons-586464                                                                               |                      |         |         |                     |                     |
	| addons  | disable dashboard -p                                                                        | addons-586464        | jenkins | v1.34.0 | 04 Sep 24 19:25 UTC |                     |
	|         | addons-586464                                                                               |                      |         |         |                     |                     |
	| start   | -p addons-586464 --wait=true                                                                | addons-586464        | jenkins | v1.34.0 | 04 Sep 24 19:25 UTC | 04 Sep 24 19:29 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                      |         |         |                     |                     |
	|         | --addons=registry                                                                           |                      |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                      |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                      |         |         |                     |                     |
	|         | --driver=kvm2  --addons=ingress                                                             |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                      |         |         |                     |                     |
	|         | --addons=helm-tiller                                                                        |                      |         |         |                     |                     |
	| addons  | addons-586464 addons disable                                                                | addons-586464        | jenkins | v1.34.0 | 04 Sep 24 19:29 UTC | 04 Sep 24 19:30 UTC |
	|         | volcano --alsologtostderr -v=1                                                              |                      |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-586464        | jenkins | v1.34.0 | 04 Sep 24 19:38 UTC | 04 Sep 24 19:38 UTC |
	|         | addons-586464                                                                               |                      |         |         |                     |                     |
	| addons  | addons-586464 addons                                                                        | addons-586464        | jenkins | v1.34.0 | 04 Sep 24 19:38 UTC | 04 Sep 24 19:38 UTC |
	|         | disable metrics-server                                                                      |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-586464 addons disable                                                                | addons-586464        | jenkins | v1.34.0 | 04 Sep 24 19:38 UTC | 04 Sep 24 19:38 UTC |
	|         | yakd --alsologtostderr -v=1                                                                 |                      |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-586464        | jenkins | v1.34.0 | 04 Sep 24 19:38 UTC | 04 Sep 24 19:38 UTC |
	|         | -p addons-586464                                                                            |                      |         |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-586464        | jenkins | v1.34.0 | 04 Sep 24 19:38 UTC | 04 Sep 24 19:38 UTC |
	|         | addons-586464                                                                               |                      |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-586464        | jenkins | v1.34.0 | 04 Sep 24 19:38 UTC | 04 Sep 24 19:38 UTC |
	|         | -p addons-586464                                                                            |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ssh     | addons-586464 ssh cat                                                                       | addons-586464        | jenkins | v1.34.0 | 04 Sep 24 19:38 UTC | 04 Sep 24 19:38 UTC |
	|         | /opt/local-path-provisioner/pvc-fce74acb-36f4-4b36-9f36-2a553b5bb45c_default_test-pvc/file1 |                      |         |         |                     |                     |
	| addons  | addons-586464 addons disable                                                                | addons-586464        | jenkins | v1.34.0 | 04 Sep 24 19:38 UTC | 04 Sep 24 19:39 UTC |
	|         | storage-provisioner-rancher                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-586464 addons disable                                                                | addons-586464        | jenkins | v1.34.0 | 04 Sep 24 19:38 UTC | 04 Sep 24 19:38 UTC |
	|         | headlamp --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-586464 addons                                                                        | addons-586464        | jenkins | v1.34.0 | 04 Sep 24 19:38 UTC | 04 Sep 24 19:39 UTC |
	|         | disable csi-hostpath-driver                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ssh     | addons-586464 ssh curl -s                                                                   | addons-586464        | jenkins | v1.34.0 | 04 Sep 24 19:39 UTC | 04 Sep 24 19:39 UTC |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                      |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                      |         |         |                     |                     |
	| addons  | addons-586464 addons                                                                        | addons-586464        | jenkins | v1.34.0 | 04 Sep 24 19:39 UTC | 04 Sep 24 19:39 UTC |
	|         | disable volumesnapshots                                                                     |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ip      | addons-586464 ip                                                                            | addons-586464        | jenkins | v1.34.0 | 04 Sep 24 19:39 UTC | 04 Sep 24 19:39 UTC |
	| addons  | addons-586464 addons disable                                                                | addons-586464        | jenkins | v1.34.0 | 04 Sep 24 19:39 UTC | 04 Sep 24 19:39 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-586464 addons disable                                                                | addons-586464        | jenkins | v1.34.0 | 04 Sep 24 19:39 UTC | 04 Sep 24 19:39 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                      |         |         |                     |                     |
	| addons  | addons-586464 addons disable                                                                | addons-586464        | jenkins | v1.34.0 | 04 Sep 24 19:39 UTC | 04 Sep 24 19:39 UTC |
	|         | helm-tiller --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| ip      | addons-586464 ip                                                                            | addons-586464        | jenkins | v1.34.0 | 04 Sep 24 19:39 UTC | 04 Sep 24 19:39 UTC |
	| addons  | addons-586464 addons disable                                                                | addons-586464        | jenkins | v1.34.0 | 04 Sep 24 19:39 UTC | 04 Sep 24 19:39 UTC |
	|         | registry --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/04 19:25:39
	Running on machine: ubuntu-20-agent-7
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0904 19:25:39.806921   13059 out.go:345] Setting OutFile to fd 1 ...
	I0904 19:25:39.807028   13059 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0904 19:25:39.807038   13059 out.go:358] Setting ErrFile to fd 2...
	I0904 19:25:39.807043   13059 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0904 19:25:39.807203   13059 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19575-5257/.minikube/bin
	I0904 19:25:39.807764   13059 out.go:352] Setting JSON to false
	I0904 19:25:39.808600   13059 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":488,"bootTime":1725477452,"procs":173,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0904 19:25:39.808696   13059 start.go:139] virtualization: kvm guest
	I0904 19:25:39.810817   13059 out.go:177] * [addons-586464] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0904 19:25:39.812112   13059 notify.go:220] Checking for updates...
	I0904 19:25:39.812146   13059 out.go:177]   - MINIKUBE_LOCATION=19575
	I0904 19:25:39.813647   13059 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0904 19:25:39.815182   13059 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19575-5257/kubeconfig
	I0904 19:25:39.816685   13059 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19575-5257/.minikube
	I0904 19:25:39.818190   13059 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0904 19:25:39.819727   13059 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0904 19:25:39.821082   13059 driver.go:394] Setting default libvirt URI to qemu:///system
	I0904 19:25:39.853205   13059 out.go:177] * Using the kvm2 driver based on user configuration
	I0904 19:25:39.854782   13059 start.go:297] selected driver: kvm2
	I0904 19:25:39.854800   13059 start.go:901] validating driver "kvm2" against <nil>
	I0904 19:25:39.854812   13059 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0904 19:25:39.855468   13059 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0904 19:25:39.855572   13059 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19575-5257/.minikube/bin:/home/jenkins/workspace/KVM_Linux_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0904 19:25:39.870403   13059 install.go:137] /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0904 19:25:39.870457   13059 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0904 19:25:39.870640   13059 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0904 19:25:39.870669   13059 cni.go:84] Creating CNI manager for ""
	I0904 19:25:39.870679   13059 cni.go:158] "kvm2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0904 19:25:39.870689   13059 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0904 19:25:39.870736   13059 start.go:340] cluster config:
	{Name:addons-586464 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-586464 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0904 19:25:39.870820   13059 iso.go:125] acquiring lock: {Name:mke56ad6fec9dae1744ebaac12ff812ec06347d7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0904 19:25:39.873563   13059 out.go:177] * Starting "addons-586464" primary control-plane node in "addons-586464" cluster
	I0904 19:25:39.875090   13059 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0904 19:25:39.875127   13059 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19575-5257/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4
	I0904 19:25:39.875137   13059 cache.go:56] Caching tarball of preloaded images
	I0904 19:25:39.875229   13059 preload.go:172] Found /home/jenkins/minikube-integration/19575-5257/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0904 19:25:39.875246   13059 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0904 19:25:39.875538   13059 profile.go:143] Saving config to /home/jenkins/minikube-integration/19575-5257/.minikube/profiles/addons-586464/config.json ...
	I0904 19:25:39.875559   13059 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19575-5257/.minikube/profiles/addons-586464/config.json: {Name:mkb3fc44960047662a40ab8e15c26ef8e13922af Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0904 19:25:39.875676   13059 start.go:360] acquireMachinesLock for addons-586464: {Name:mkc4ed759d6ca8e91fb8a1055a87f308832414ca Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0904 19:25:39.875717   13059 start.go:364] duration metric: took 29.66µs to acquireMachinesLock for "addons-586464"
	I0904 19:25:39.875741   13059 start.go:93] Provisioning new machine with config: &{Name:addons-586464 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.34.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-586464 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0904 19:25:39.875791   13059 start.go:125] createHost starting for "" (driver="kvm2")
	I0904 19:25:39.877675   13059 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0904 19:25:39.877849   13059 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0904 19:25:39.877889   13059 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0904 19:25:39.892143   13059 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36079
	I0904 19:25:39.892596   13059 main.go:141] libmachine: () Calling .GetVersion
	I0904 19:25:39.893162   13059 main.go:141] libmachine: Using API Version  1
	I0904 19:25:39.893181   13059 main.go:141] libmachine: () Calling .SetConfigRaw
	I0904 19:25:39.893514   13059 main.go:141] libmachine: () Calling .GetMachineName
	I0904 19:25:39.893684   13059 main.go:141] libmachine: (addons-586464) Calling .GetMachineName
	I0904 19:25:39.893831   13059 main.go:141] libmachine: (addons-586464) Calling .DriverName
	I0904 19:25:39.893975   13059 start.go:159] libmachine.API.Create for "addons-586464" (driver="kvm2")
	I0904 19:25:39.894013   13059 client.go:168] LocalClient.Create starting
	I0904 19:25:39.894047   13059 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19575-5257/.minikube/certs/ca.pem
	I0904 19:25:40.191923   13059 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19575-5257/.minikube/certs/cert.pem
	I0904 19:25:40.260270   13059 main.go:141] libmachine: Running pre-create checks...
	I0904 19:25:40.260307   13059 main.go:141] libmachine: (addons-586464) Calling .PreCreateCheck
	I0904 19:25:40.260788   13059 main.go:141] libmachine: (addons-586464) Calling .GetConfigRaw
	I0904 19:25:40.261279   13059 main.go:141] libmachine: Creating machine...
	I0904 19:25:40.261295   13059 main.go:141] libmachine: (addons-586464) Calling .Create
	I0904 19:25:40.261509   13059 main.go:141] libmachine: (addons-586464) Creating KVM machine...
	I0904 19:25:40.262788   13059 main.go:141] libmachine: (addons-586464) DBG | found existing default KVM network
	I0904 19:25:40.263529   13059 main.go:141] libmachine: (addons-586464) DBG | I0904 19:25:40.263363   13081 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00010f1f0}
	I0904 19:25:40.263552   13059 main.go:141] libmachine: (addons-586464) DBG | created network xml: 
	I0904 19:25:40.263569   13059 main.go:141] libmachine: (addons-586464) DBG | <network>
	I0904 19:25:40.263579   13059 main.go:141] libmachine: (addons-586464) DBG |   <name>mk-addons-586464</name>
	I0904 19:25:40.263592   13059 main.go:141] libmachine: (addons-586464) DBG |   <dns enable='no'/>
	I0904 19:25:40.263601   13059 main.go:141] libmachine: (addons-586464) DBG |   
	I0904 19:25:40.263612   13059 main.go:141] libmachine: (addons-586464) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0904 19:25:40.263622   13059 main.go:141] libmachine: (addons-586464) DBG |     <dhcp>
	I0904 19:25:40.263666   13059 main.go:141] libmachine: (addons-586464) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0904 19:25:40.263694   13059 main.go:141] libmachine: (addons-586464) DBG |     </dhcp>
	I0904 19:25:40.263704   13059 main.go:141] libmachine: (addons-586464) DBG |   </ip>
	I0904 19:25:40.263712   13059 main.go:141] libmachine: (addons-586464) DBG |   
	I0904 19:25:40.263719   13059 main.go:141] libmachine: (addons-586464) DBG | </network>
	I0904 19:25:40.263726   13059 main.go:141] libmachine: (addons-586464) DBG | 
	I0904 19:25:40.269449   13059 main.go:141] libmachine: (addons-586464) DBG | trying to create private KVM network mk-addons-586464 192.168.39.0/24...
	I0904 19:25:40.332264   13059 main.go:141] libmachine: (addons-586464) Setting up store path in /home/jenkins/minikube-integration/19575-5257/.minikube/machines/addons-586464 ...
	I0904 19:25:40.332306   13059 main.go:141] libmachine: (addons-586464) DBG | private KVM network mk-addons-586464 192.168.39.0/24 created
	I0904 19:25:40.332322   13059 main.go:141] libmachine: (addons-586464) Building disk image from file:///home/jenkins/minikube-integration/19575-5257/.minikube/cache/iso/amd64/minikube-v1.34.0-amd64.iso
	I0904 19:25:40.332333   13059 main.go:141] libmachine: (addons-586464) DBG | I0904 19:25:40.332203   13081 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19575-5257/.minikube
	I0904 19:25:40.332362   13059 main.go:141] libmachine: (addons-586464) Downloading /home/jenkins/minikube-integration/19575-5257/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19575-5257/.minikube/cache/iso/amd64/minikube-v1.34.0-amd64.iso...
	I0904 19:25:40.584009   13059 main.go:141] libmachine: (addons-586464) DBG | I0904 19:25:40.583866   13081 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19575-5257/.minikube/machines/addons-586464/id_rsa...
	I0904 19:25:40.750874   13059 main.go:141] libmachine: (addons-586464) DBG | I0904 19:25:40.750764   13081 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19575-5257/.minikube/machines/addons-586464/addons-586464.rawdisk...
	I0904 19:25:40.750897   13059 main.go:141] libmachine: (addons-586464) DBG | Writing magic tar header
	I0904 19:25:40.750908   13059 main.go:141] libmachine: (addons-586464) DBG | Writing SSH key tar header
	I0904 19:25:40.750915   13059 main.go:141] libmachine: (addons-586464) DBG | I0904 19:25:40.750877   13081 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19575-5257/.minikube/machines/addons-586464 ...
	I0904 19:25:40.750961   13059 main.go:141] libmachine: (addons-586464) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19575-5257/.minikube/machines/addons-586464
	I0904 19:25:40.750983   13059 main.go:141] libmachine: (addons-586464) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19575-5257/.minikube/machines
	I0904 19:25:40.750994   13059 main.go:141] libmachine: (addons-586464) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19575-5257/.minikube
	I0904 19:25:40.751007   13059 main.go:141] libmachine: (addons-586464) Setting executable bit set on /home/jenkins/minikube-integration/19575-5257/.minikube/machines/addons-586464 (perms=drwx------)
	I0904 19:25:40.751015   13059 main.go:141] libmachine: (addons-586464) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19575-5257
	I0904 19:25:40.751025   13059 main.go:141] libmachine: (addons-586464) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0904 19:25:40.751030   13059 main.go:141] libmachine: (addons-586464) DBG | Checking permissions on dir: /home/jenkins
	I0904 19:25:40.751036   13059 main.go:141] libmachine: (addons-586464) DBG | Checking permissions on dir: /home
	I0904 19:25:40.751042   13059 main.go:141] libmachine: (addons-586464) DBG | Skipping /home - not owner
	I0904 19:25:40.751054   13059 main.go:141] libmachine: (addons-586464) Setting executable bit set on /home/jenkins/minikube-integration/19575-5257/.minikube/machines (perms=drwxr-xr-x)
	I0904 19:25:40.751066   13059 main.go:141] libmachine: (addons-586464) Setting executable bit set on /home/jenkins/minikube-integration/19575-5257/.minikube (perms=drwxr-xr-x)
	I0904 19:25:40.751082   13059 main.go:141] libmachine: (addons-586464) Setting executable bit set on /home/jenkins/minikube-integration/19575-5257 (perms=drwxrwxr-x)
	I0904 19:25:40.751096   13059 main.go:141] libmachine: (addons-586464) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0904 19:25:40.751109   13059 main.go:141] libmachine: (addons-586464) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0904 19:25:40.751119   13059 main.go:141] libmachine: (addons-586464) Creating domain...
	I0904 19:25:40.752173   13059 main.go:141] libmachine: (addons-586464) define libvirt domain using xml: 
	I0904 19:25:40.752202   13059 main.go:141] libmachine: (addons-586464) <domain type='kvm'>
	I0904 19:25:40.752224   13059 main.go:141] libmachine: (addons-586464)   <name>addons-586464</name>
	I0904 19:25:40.752241   13059 main.go:141] libmachine: (addons-586464)   <memory unit='MiB'>4000</memory>
	I0904 19:25:40.752254   13059 main.go:141] libmachine: (addons-586464)   <vcpu>2</vcpu>
	I0904 19:25:40.752266   13059 main.go:141] libmachine: (addons-586464)   <features>
	I0904 19:25:40.752278   13059 main.go:141] libmachine: (addons-586464)     <acpi/>
	I0904 19:25:40.752289   13059 main.go:141] libmachine: (addons-586464)     <apic/>
	I0904 19:25:40.752300   13059 main.go:141] libmachine: (addons-586464)     <pae/>
	I0904 19:25:40.752310   13059 main.go:141] libmachine: (addons-586464)     
	I0904 19:25:40.752321   13059 main.go:141] libmachine: (addons-586464)   </features>
	I0904 19:25:40.752337   13059 main.go:141] libmachine: (addons-586464)   <cpu mode='host-passthrough'>
	I0904 19:25:40.752349   13059 main.go:141] libmachine: (addons-586464)   
	I0904 19:25:40.752362   13059 main.go:141] libmachine: (addons-586464)   </cpu>
	I0904 19:25:40.752371   13059 main.go:141] libmachine: (addons-586464)   <os>
	I0904 19:25:40.752383   13059 main.go:141] libmachine: (addons-586464)     <type>hvm</type>
	I0904 19:25:40.752393   13059 main.go:141] libmachine: (addons-586464)     <boot dev='cdrom'/>
	I0904 19:25:40.752401   13059 main.go:141] libmachine: (addons-586464)     <boot dev='hd'/>
	I0904 19:25:40.752414   13059 main.go:141] libmachine: (addons-586464)     <bootmenu enable='no'/>
	I0904 19:25:40.752429   13059 main.go:141] libmachine: (addons-586464)   </os>
	I0904 19:25:40.752442   13059 main.go:141] libmachine: (addons-586464)   <devices>
	I0904 19:25:40.752454   13059 main.go:141] libmachine: (addons-586464)     <disk type='file' device='cdrom'>
	I0904 19:25:40.752472   13059 main.go:141] libmachine: (addons-586464)       <source file='/home/jenkins/minikube-integration/19575-5257/.minikube/machines/addons-586464/boot2docker.iso'/>
	I0904 19:25:40.752484   13059 main.go:141] libmachine: (addons-586464)       <target dev='hdc' bus='scsi'/>
	I0904 19:25:40.752493   13059 main.go:141] libmachine: (addons-586464)       <readonly/>
	I0904 19:25:40.752504   13059 main.go:141] libmachine: (addons-586464)     </disk>
	I0904 19:25:40.752518   13059 main.go:141] libmachine: (addons-586464)     <disk type='file' device='disk'>
	I0904 19:25:40.752531   13059 main.go:141] libmachine: (addons-586464)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0904 19:25:40.752548   13059 main.go:141] libmachine: (addons-586464)       <source file='/home/jenkins/minikube-integration/19575-5257/.minikube/machines/addons-586464/addons-586464.rawdisk'/>
	I0904 19:25:40.752560   13059 main.go:141] libmachine: (addons-586464)       <target dev='hda' bus='virtio'/>
	I0904 19:25:40.752588   13059 main.go:141] libmachine: (addons-586464)     </disk>
	I0904 19:25:40.752610   13059 main.go:141] libmachine: (addons-586464)     <interface type='network'>
	I0904 19:25:40.752656   13059 main.go:141] libmachine: (addons-586464)       <source network='mk-addons-586464'/>
	I0904 19:25:40.752680   13059 main.go:141] libmachine: (addons-586464)       <model type='virtio'/>
	I0904 19:25:40.752689   13059 main.go:141] libmachine: (addons-586464)     </interface>
	I0904 19:25:40.752694   13059 main.go:141] libmachine: (addons-586464)     <interface type='network'>
	I0904 19:25:40.752703   13059 main.go:141] libmachine: (addons-586464)       <source network='default'/>
	I0904 19:25:40.752707   13059 main.go:141] libmachine: (addons-586464)       <model type='virtio'/>
	I0904 19:25:40.752715   13059 main.go:141] libmachine: (addons-586464)     </interface>
	I0904 19:25:40.752720   13059 main.go:141] libmachine: (addons-586464)     <serial type='pty'>
	I0904 19:25:40.752725   13059 main.go:141] libmachine: (addons-586464)       <target port='0'/>
	I0904 19:25:40.752732   13059 main.go:141] libmachine: (addons-586464)     </serial>
	I0904 19:25:40.752738   13059 main.go:141] libmachine: (addons-586464)     <console type='pty'>
	I0904 19:25:40.752752   13059 main.go:141] libmachine: (addons-586464)       <target type='serial' port='0'/>
	I0904 19:25:40.752760   13059 main.go:141] libmachine: (addons-586464)     </console>
	I0904 19:25:40.752765   13059 main.go:141] libmachine: (addons-586464)     <rng model='virtio'>
	I0904 19:25:40.752794   13059 main.go:141] libmachine: (addons-586464)       <backend model='random'>/dev/random</backend>
	I0904 19:25:40.752811   13059 main.go:141] libmachine: (addons-586464)     </rng>
	I0904 19:25:40.752823   13059 main.go:141] libmachine: (addons-586464)     
	I0904 19:25:40.752833   13059 main.go:141] libmachine: (addons-586464)     
	I0904 19:25:40.752858   13059 main.go:141] libmachine: (addons-586464)   </devices>
	I0904 19:25:40.752870   13059 main.go:141] libmachine: (addons-586464) </domain>
	I0904 19:25:40.752889   13059 main.go:141] libmachine: (addons-586464) 
	I0904 19:25:40.758935   13059 main.go:141] libmachine: (addons-586464) DBG | domain addons-586464 has defined MAC address 52:54:00:e5:db:df in network default
	I0904 19:25:40.759461   13059 main.go:141] libmachine: (addons-586464) Ensuring networks are active...
	I0904 19:25:40.759481   13059 main.go:141] libmachine: (addons-586464) DBG | domain addons-586464 has defined MAC address 52:54:00:bb:04:2c in network mk-addons-586464
	I0904 19:25:40.760050   13059 main.go:141] libmachine: (addons-586464) Ensuring network default is active
	I0904 19:25:40.760361   13059 main.go:141] libmachine: (addons-586464) Ensuring network mk-addons-586464 is active
	I0904 19:25:40.760856   13059 main.go:141] libmachine: (addons-586464) Getting domain xml...
	I0904 19:25:40.761528   13059 main.go:141] libmachine: (addons-586464) Creating domain...
	I0904 19:25:42.177937   13059 main.go:141] libmachine: (addons-586464) Waiting to get IP...
	I0904 19:25:42.178671   13059 main.go:141] libmachine: (addons-586464) DBG | domain addons-586464 has defined MAC address 52:54:00:bb:04:2c in network mk-addons-586464
	I0904 19:25:42.178995   13059 main.go:141] libmachine: (addons-586464) DBG | unable to find current IP address of domain addons-586464 in network mk-addons-586464
	I0904 19:25:42.179018   13059 main.go:141] libmachine: (addons-586464) DBG | I0904 19:25:42.178991   13081 retry.go:31] will retry after 256.451078ms: waiting for machine to come up
	I0904 19:25:42.437488   13059 main.go:141] libmachine: (addons-586464) DBG | domain addons-586464 has defined MAC address 52:54:00:bb:04:2c in network mk-addons-586464
	I0904 19:25:42.437976   13059 main.go:141] libmachine: (addons-586464) DBG | unable to find current IP address of domain addons-586464 in network mk-addons-586464
	I0904 19:25:42.437998   13059 main.go:141] libmachine: (addons-586464) DBG | I0904 19:25:42.437948   13081 retry.go:31] will retry after 281.709115ms: waiting for machine to come up
	I0904 19:25:42.721367   13059 main.go:141] libmachine: (addons-586464) DBG | domain addons-586464 has defined MAC address 52:54:00:bb:04:2c in network mk-addons-586464
	I0904 19:25:42.721768   13059 main.go:141] libmachine: (addons-586464) DBG | unable to find current IP address of domain addons-586464 in network mk-addons-586464
	I0904 19:25:42.721786   13059 main.go:141] libmachine: (addons-586464) DBG | I0904 19:25:42.721720   13081 retry.go:31] will retry after 397.001929ms: waiting for machine to come up
	I0904 19:25:43.120405   13059 main.go:141] libmachine: (addons-586464) DBG | domain addons-586464 has defined MAC address 52:54:00:bb:04:2c in network mk-addons-586464
	I0904 19:25:43.120747   13059 main.go:141] libmachine: (addons-586464) DBG | unable to find current IP address of domain addons-586464 in network mk-addons-586464
	I0904 19:25:43.120775   13059 main.go:141] libmachine: (addons-586464) DBG | I0904 19:25:43.120702   13081 retry.go:31] will retry after 373.67778ms: waiting for machine to come up
	I0904 19:25:43.496111   13059 main.go:141] libmachine: (addons-586464) DBG | domain addons-586464 has defined MAC address 52:54:00:bb:04:2c in network mk-addons-586464
	I0904 19:25:43.496510   13059 main.go:141] libmachine: (addons-586464) DBG | unable to find current IP address of domain addons-586464 in network mk-addons-586464
	I0904 19:25:43.496549   13059 main.go:141] libmachine: (addons-586464) DBG | I0904 19:25:43.496454   13081 retry.go:31] will retry after 630.578554ms: waiting for machine to come up
	I0904 19:25:44.128282   13059 main.go:141] libmachine: (addons-586464) DBG | domain addons-586464 has defined MAC address 52:54:00:bb:04:2c in network mk-addons-586464
	I0904 19:25:44.128865   13059 main.go:141] libmachine: (addons-586464) DBG | unable to find current IP address of domain addons-586464 in network mk-addons-586464
	I0904 19:25:44.128892   13059 main.go:141] libmachine: (addons-586464) DBG | I0904 19:25:44.128818   13081 retry.go:31] will retry after 775.846252ms: waiting for machine to come up
	I0904 19:25:44.906493   13059 main.go:141] libmachine: (addons-586464) DBG | domain addons-586464 has defined MAC address 52:54:00:bb:04:2c in network mk-addons-586464
	I0904 19:25:44.906899   13059 main.go:141] libmachine: (addons-586464) DBG | unable to find current IP address of domain addons-586464 in network mk-addons-586464
	I0904 19:25:44.906925   13059 main.go:141] libmachine: (addons-586464) DBG | I0904 19:25:44.906848   13081 retry.go:31] will retry after 807.03143ms: waiting for machine to come up
	I0904 19:25:45.715523   13059 main.go:141] libmachine: (addons-586464) DBG | domain addons-586464 has defined MAC address 52:54:00:bb:04:2c in network mk-addons-586464
	I0904 19:25:45.715979   13059 main.go:141] libmachine: (addons-586464) DBG | unable to find current IP address of domain addons-586464 in network mk-addons-586464
	I0904 19:25:45.716000   13059 main.go:141] libmachine: (addons-586464) DBG | I0904 19:25:45.715938   13081 retry.go:31] will retry after 1.425861169s: waiting for machine to come up
	I0904 19:25:47.143533   13059 main.go:141] libmachine: (addons-586464) DBG | domain addons-586464 has defined MAC address 52:54:00:bb:04:2c in network mk-addons-586464
	I0904 19:25:47.143922   13059 main.go:141] libmachine: (addons-586464) DBG | unable to find current IP address of domain addons-586464 in network mk-addons-586464
	I0904 19:25:47.143943   13059 main.go:141] libmachine: (addons-586464) DBG | I0904 19:25:47.143876   13081 retry.go:31] will retry after 1.412081275s: waiting for machine to come up
	I0904 19:25:48.558311   13059 main.go:141] libmachine: (addons-586464) DBG | domain addons-586464 has defined MAC address 52:54:00:bb:04:2c in network mk-addons-586464
	I0904 19:25:48.558681   13059 main.go:141] libmachine: (addons-586464) DBG | unable to find current IP address of domain addons-586464 in network mk-addons-586464
	I0904 19:25:48.558701   13059 main.go:141] libmachine: (addons-586464) DBG | I0904 19:25:48.558645   13081 retry.go:31] will retry after 1.719735759s: waiting for machine to come up
	I0904 19:25:50.279587   13059 main.go:141] libmachine: (addons-586464) DBG | domain addons-586464 has defined MAC address 52:54:00:bb:04:2c in network mk-addons-586464
	I0904 19:25:50.280032   13059 main.go:141] libmachine: (addons-586464) DBG | unable to find current IP address of domain addons-586464 in network mk-addons-586464
	I0904 19:25:50.280066   13059 main.go:141] libmachine: (addons-586464) DBG | I0904 19:25:50.280011   13081 retry.go:31] will retry after 2.878600378s: waiting for machine to come up
	I0904 19:25:53.162042   13059 main.go:141] libmachine: (addons-586464) DBG | domain addons-586464 has defined MAC address 52:54:00:bb:04:2c in network mk-addons-586464
	I0904 19:25:53.162396   13059 main.go:141] libmachine: (addons-586464) DBG | unable to find current IP address of domain addons-586464 in network mk-addons-586464
	I0904 19:25:53.162418   13059 main.go:141] libmachine: (addons-586464) DBG | I0904 19:25:53.162357   13081 retry.go:31] will retry after 2.534106174s: waiting for machine to come up
	I0904 19:25:55.698542   13059 main.go:141] libmachine: (addons-586464) DBG | domain addons-586464 has defined MAC address 52:54:00:bb:04:2c in network mk-addons-586464
	I0904 19:25:55.698902   13059 main.go:141] libmachine: (addons-586464) DBG | unable to find current IP address of domain addons-586464 in network mk-addons-586464
	I0904 19:25:55.698928   13059 main.go:141] libmachine: (addons-586464) DBG | I0904 19:25:55.698876   13081 retry.go:31] will retry after 3.919715795s: waiting for machine to come up
	I0904 19:25:59.622878   13059 main.go:141] libmachine: (addons-586464) DBG | domain addons-586464 has defined MAC address 52:54:00:bb:04:2c in network mk-addons-586464
	I0904 19:25:59.623265   13059 main.go:141] libmachine: (addons-586464) DBG | unable to find current IP address of domain addons-586464 in network mk-addons-586464
	I0904 19:25:59.623286   13059 main.go:141] libmachine: (addons-586464) DBG | I0904 19:25:59.623219   13081 retry.go:31] will retry after 4.757105837s: waiting for machine to come up
	I0904 19:26:04.381585   13059 main.go:141] libmachine: (addons-586464) DBG | domain addons-586464 has defined MAC address 52:54:00:bb:04:2c in network mk-addons-586464
	I0904 19:26:04.381985   13059 main.go:141] libmachine: (addons-586464) Found IP for machine: 192.168.39.55
	I0904 19:26:04.381998   13059 main.go:141] libmachine: (addons-586464) Reserving static IP address...
	I0904 19:26:04.382012   13059 main.go:141] libmachine: (addons-586464) DBG | domain addons-586464 has current primary IP address 192.168.39.55 and MAC address 52:54:00:bb:04:2c in network mk-addons-586464
	I0904 19:26:04.382348   13059 main.go:141] libmachine: (addons-586464) DBG | unable to find host DHCP lease matching {name: "addons-586464", mac: "52:54:00:bb:04:2c", ip: "192.168.39.55"} in network mk-addons-586464
	I0904 19:26:04.455837   13059 main.go:141] libmachine: (addons-586464) DBG | Getting to WaitForSSH function...
	I0904 19:26:04.455868   13059 main.go:141] libmachine: (addons-586464) Reserved static IP address: 192.168.39.55
	I0904 19:26:04.455915   13059 main.go:141] libmachine: (addons-586464) Waiting for SSH to be available...
	I0904 19:26:04.458392   13059 main.go:141] libmachine: (addons-586464) DBG | domain addons-586464 has defined MAC address 52:54:00:bb:04:2c in network mk-addons-586464
	I0904 19:26:04.458758   13059 main.go:141] libmachine: (addons-586464) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bb:04:2c", ip: ""} in network mk-addons-586464: {Iface:virbr1 ExpiryTime:2024-09-04 20:25:54 +0000 UTC Type:0 Mac:52:54:00:bb:04:2c Iaid: IPaddr:192.168.39.55 Prefix:24 Hostname:minikube Clientid:01:52:54:00:bb:04:2c}
	I0904 19:26:04.458786   13059 main.go:141] libmachine: (addons-586464) DBG | domain addons-586464 has defined IP address 192.168.39.55 and MAC address 52:54:00:bb:04:2c in network mk-addons-586464
	I0904 19:26:04.458903   13059 main.go:141] libmachine: (addons-586464) DBG | Using SSH client type: external
	I0904 19:26:04.458986   13059 main.go:141] libmachine: (addons-586464) DBG | Using SSH private key: /home/jenkins/minikube-integration/19575-5257/.minikube/machines/addons-586464/id_rsa (-rw-------)
	I0904 19:26:04.459013   13059 main.go:141] libmachine: (addons-586464) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.55 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19575-5257/.minikube/machines/addons-586464/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0904 19:26:04.459026   13059 main.go:141] libmachine: (addons-586464) DBG | About to run SSH command:
	I0904 19:26:04.459039   13059 main.go:141] libmachine: (addons-586464) DBG | exit 0
	I0904 19:26:04.593314   13059 main.go:141] libmachine: (addons-586464) DBG | SSH cmd err, output: <nil>: 
	I0904 19:26:04.593611   13059 main.go:141] libmachine: (addons-586464) KVM machine creation complete!
	I0904 19:26:04.594041   13059 main.go:141] libmachine: (addons-586464) Calling .GetConfigRaw
	I0904 19:26:04.594530   13059 main.go:141] libmachine: (addons-586464) Calling .DriverName
	I0904 19:26:04.594766   13059 main.go:141] libmachine: (addons-586464) Calling .DriverName
	I0904 19:26:04.594929   13059 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0904 19:26:04.594944   13059 main.go:141] libmachine: (addons-586464) Calling .GetState
	I0904 19:26:04.596194   13059 main.go:141] libmachine: Detecting operating system of created instance...
	I0904 19:26:04.596209   13059 main.go:141] libmachine: Waiting for SSH to be available...
	I0904 19:26:04.596217   13059 main.go:141] libmachine: Getting to WaitForSSH function...
	I0904 19:26:04.596225   13059 main.go:141] libmachine: (addons-586464) Calling .GetSSHHostname
	I0904 19:26:04.598741   13059 main.go:141] libmachine: (addons-586464) DBG | domain addons-586464 has defined MAC address 52:54:00:bb:04:2c in network mk-addons-586464
	I0904 19:26:04.599071   13059 main.go:141] libmachine: (addons-586464) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bb:04:2c", ip: ""} in network mk-addons-586464: {Iface:virbr1 ExpiryTime:2024-09-04 20:25:54 +0000 UTC Type:0 Mac:52:54:00:bb:04:2c Iaid: IPaddr:192.168.39.55 Prefix:24 Hostname:addons-586464 Clientid:01:52:54:00:bb:04:2c}
	I0904 19:26:04.599091   13059 main.go:141] libmachine: (addons-586464) DBG | domain addons-586464 has defined IP address 192.168.39.55 and MAC address 52:54:00:bb:04:2c in network mk-addons-586464
	I0904 19:26:04.599264   13059 main.go:141] libmachine: (addons-586464) Calling .GetSSHPort
	I0904 19:26:04.599488   13059 main.go:141] libmachine: (addons-586464) Calling .GetSSHKeyPath
	I0904 19:26:04.599635   13059 main.go:141] libmachine: (addons-586464) Calling .GetSSHKeyPath
	I0904 19:26:04.599773   13059 main.go:141] libmachine: (addons-586464) Calling .GetSSHUsername
	I0904 19:26:04.599907   13059 main.go:141] libmachine: Using SSH client type: native
	I0904 19:26:04.600069   13059 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.55 22 <nil> <nil>}
	I0904 19:26:04.600079   13059 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0904 19:26:04.700398   13059 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0904 19:26:04.700417   13059 main.go:141] libmachine: Detecting the provisioner...
	I0904 19:26:04.700425   13059 main.go:141] libmachine: (addons-586464) Calling .GetSSHHostname
	I0904 19:26:04.702891   13059 main.go:141] libmachine: (addons-586464) DBG | domain addons-586464 has defined MAC address 52:54:00:bb:04:2c in network mk-addons-586464
	I0904 19:26:04.703275   13059 main.go:141] libmachine: (addons-586464) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bb:04:2c", ip: ""} in network mk-addons-586464: {Iface:virbr1 ExpiryTime:2024-09-04 20:25:54 +0000 UTC Type:0 Mac:52:54:00:bb:04:2c Iaid: IPaddr:192.168.39.55 Prefix:24 Hostname:addons-586464 Clientid:01:52:54:00:bb:04:2c}
	I0904 19:26:04.703304   13059 main.go:141] libmachine: (addons-586464) DBG | domain addons-586464 has defined IP address 192.168.39.55 and MAC address 52:54:00:bb:04:2c in network mk-addons-586464
	I0904 19:26:04.703414   13059 main.go:141] libmachine: (addons-586464) Calling .GetSSHPort
	I0904 19:26:04.703601   13059 main.go:141] libmachine: (addons-586464) Calling .GetSSHKeyPath
	I0904 19:26:04.703737   13059 main.go:141] libmachine: (addons-586464) Calling .GetSSHKeyPath
	I0904 19:26:04.703858   13059 main.go:141] libmachine: (addons-586464) Calling .GetSSHUsername
	I0904 19:26:04.703997   13059 main.go:141] libmachine: Using SSH client type: native
	I0904 19:26:04.704197   13059 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.55 22 <nil> <nil>}
	I0904 19:26:04.704209   13059 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0904 19:26:04.805625   13059 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0904 19:26:04.805709   13059 main.go:141] libmachine: found compatible host: buildroot
	I0904 19:26:04.805721   13059 main.go:141] libmachine: Provisioning with buildroot...
	I0904 19:26:04.805731   13059 main.go:141] libmachine: (addons-586464) Calling .GetMachineName
	I0904 19:26:04.805950   13059 buildroot.go:166] provisioning hostname "addons-586464"
	I0904 19:26:04.805970   13059 main.go:141] libmachine: (addons-586464) Calling .GetMachineName
	I0904 19:26:04.806186   13059 main.go:141] libmachine: (addons-586464) Calling .GetSSHHostname
	I0904 19:26:04.808801   13059 main.go:141] libmachine: (addons-586464) DBG | domain addons-586464 has defined MAC address 52:54:00:bb:04:2c in network mk-addons-586464
	I0904 19:26:04.809108   13059 main.go:141] libmachine: (addons-586464) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bb:04:2c", ip: ""} in network mk-addons-586464: {Iface:virbr1 ExpiryTime:2024-09-04 20:25:54 +0000 UTC Type:0 Mac:52:54:00:bb:04:2c Iaid: IPaddr:192.168.39.55 Prefix:24 Hostname:addons-586464 Clientid:01:52:54:00:bb:04:2c}
	I0904 19:26:04.809163   13059 main.go:141] libmachine: (addons-586464) DBG | domain addons-586464 has defined IP address 192.168.39.55 and MAC address 52:54:00:bb:04:2c in network mk-addons-586464
	I0904 19:26:04.809304   13059 main.go:141] libmachine: (addons-586464) Calling .GetSSHPort
	I0904 19:26:04.809497   13059 main.go:141] libmachine: (addons-586464) Calling .GetSSHKeyPath
	I0904 19:26:04.809638   13059 main.go:141] libmachine: (addons-586464) Calling .GetSSHKeyPath
	I0904 19:26:04.809769   13059 main.go:141] libmachine: (addons-586464) Calling .GetSSHUsername
	I0904 19:26:04.809923   13059 main.go:141] libmachine: Using SSH client type: native
	I0904 19:26:04.810087   13059 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.55 22 <nil> <nil>}
	I0904 19:26:04.810103   13059 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-586464 && echo "addons-586464" | sudo tee /etc/hostname
	I0904 19:26:04.929452   13059 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-586464
	
	I0904 19:26:04.929480   13059 main.go:141] libmachine: (addons-586464) Calling .GetSSHHostname
	I0904 19:26:04.931981   13059 main.go:141] libmachine: (addons-586464) DBG | domain addons-586464 has defined MAC address 52:54:00:bb:04:2c in network mk-addons-586464
	I0904 19:26:04.932322   13059 main.go:141] libmachine: (addons-586464) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bb:04:2c", ip: ""} in network mk-addons-586464: {Iface:virbr1 ExpiryTime:2024-09-04 20:25:54 +0000 UTC Type:0 Mac:52:54:00:bb:04:2c Iaid: IPaddr:192.168.39.55 Prefix:24 Hostname:addons-586464 Clientid:01:52:54:00:bb:04:2c}
	I0904 19:26:04.932350   13059 main.go:141] libmachine: (addons-586464) DBG | domain addons-586464 has defined IP address 192.168.39.55 and MAC address 52:54:00:bb:04:2c in network mk-addons-586464
	I0904 19:26:04.932558   13059 main.go:141] libmachine: (addons-586464) Calling .GetSSHPort
	I0904 19:26:04.932754   13059 main.go:141] libmachine: (addons-586464) Calling .GetSSHKeyPath
	I0904 19:26:04.932990   13059 main.go:141] libmachine: (addons-586464) Calling .GetSSHKeyPath
	I0904 19:26:04.933163   13059 main.go:141] libmachine: (addons-586464) Calling .GetSSHUsername
	I0904 19:26:04.933336   13059 main.go:141] libmachine: Using SSH client type: native
	I0904 19:26:04.933543   13059 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.55 22 <nil> <nil>}
	I0904 19:26:04.933561   13059 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-586464' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-586464/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-586464' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0904 19:26:05.045649   13059 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0904 19:26:05.045688   13059 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19575-5257/.minikube CaCertPath:/home/jenkins/minikube-integration/19575-5257/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19575-5257/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19575-5257/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19575-5257/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19575-5257/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19575-5257/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19575-5257/.minikube}
	I0904 19:26:05.045757   13059 buildroot.go:174] setting up certificates
	I0904 19:26:05.045772   13059 provision.go:84] configureAuth start
	I0904 19:26:05.045790   13059 main.go:141] libmachine: (addons-586464) Calling .GetMachineName
	I0904 19:26:05.046038   13059 main.go:141] libmachine: (addons-586464) Calling .GetIP
	I0904 19:26:05.048245   13059 main.go:141] libmachine: (addons-586464) DBG | domain addons-586464 has defined MAC address 52:54:00:bb:04:2c in network mk-addons-586464
	I0904 19:26:05.048528   13059 main.go:141] libmachine: (addons-586464) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bb:04:2c", ip: ""} in network mk-addons-586464: {Iface:virbr1 ExpiryTime:2024-09-04 20:25:54 +0000 UTC Type:0 Mac:52:54:00:bb:04:2c Iaid: IPaddr:192.168.39.55 Prefix:24 Hostname:addons-586464 Clientid:01:52:54:00:bb:04:2c}
	I0904 19:26:05.048563   13059 main.go:141] libmachine: (addons-586464) DBG | domain addons-586464 has defined IP address 192.168.39.55 and MAC address 52:54:00:bb:04:2c in network mk-addons-586464
	I0904 19:26:05.048642   13059 main.go:141] libmachine: (addons-586464) Calling .GetSSHHostname
	I0904 19:26:05.050687   13059 main.go:141] libmachine: (addons-586464) DBG | domain addons-586464 has defined MAC address 52:54:00:bb:04:2c in network mk-addons-586464
	I0904 19:26:05.051007   13059 main.go:141] libmachine: (addons-586464) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bb:04:2c", ip: ""} in network mk-addons-586464: {Iface:virbr1 ExpiryTime:2024-09-04 20:25:54 +0000 UTC Type:0 Mac:52:54:00:bb:04:2c Iaid: IPaddr:192.168.39.55 Prefix:24 Hostname:addons-586464 Clientid:01:52:54:00:bb:04:2c}
	I0904 19:26:05.051035   13059 main.go:141] libmachine: (addons-586464) DBG | domain addons-586464 has defined IP address 192.168.39.55 and MAC address 52:54:00:bb:04:2c in network mk-addons-586464
	I0904 19:26:05.051167   13059 provision.go:143] copyHostCerts
	I0904 19:26:05.051237   13059 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19575-5257/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19575-5257/.minikube/ca.pem (1082 bytes)
	I0904 19:26:05.051396   13059 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19575-5257/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19575-5257/.minikube/cert.pem (1123 bytes)
	I0904 19:26:05.051478   13059 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19575-5257/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19575-5257/.minikube/key.pem (1679 bytes)
	I0904 19:26:05.051543   13059 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19575-5257/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19575-5257/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19575-5257/.minikube/certs/ca-key.pem org=jenkins.addons-586464 san=[127.0.0.1 192.168.39.55 addons-586464 localhost minikube]
	I0904 19:26:05.132907   13059 provision.go:177] copyRemoteCerts
	I0904 19:26:05.132964   13059 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0904 19:26:05.132988   13059 main.go:141] libmachine: (addons-586464) Calling .GetSSHHostname
	I0904 19:26:05.135955   13059 main.go:141] libmachine: (addons-586464) DBG | domain addons-586464 has defined MAC address 52:54:00:bb:04:2c in network mk-addons-586464
	I0904 19:26:05.136309   13059 main.go:141] libmachine: (addons-586464) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bb:04:2c", ip: ""} in network mk-addons-586464: {Iface:virbr1 ExpiryTime:2024-09-04 20:25:54 +0000 UTC Type:0 Mac:52:54:00:bb:04:2c Iaid: IPaddr:192.168.39.55 Prefix:24 Hostname:addons-586464 Clientid:01:52:54:00:bb:04:2c}
	I0904 19:26:05.136341   13059 main.go:141] libmachine: (addons-586464) DBG | domain addons-586464 has defined IP address 192.168.39.55 and MAC address 52:54:00:bb:04:2c in network mk-addons-586464
	I0904 19:26:05.136510   13059 main.go:141] libmachine: (addons-586464) Calling .GetSSHPort
	I0904 19:26:05.136687   13059 main.go:141] libmachine: (addons-586464) Calling .GetSSHKeyPath
	I0904 19:26:05.136861   13059 main.go:141] libmachine: (addons-586464) Calling .GetSSHUsername
	I0904 19:26:05.137029   13059 sshutil.go:53] new ssh client: &{IP:192.168.39.55 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19575-5257/.minikube/machines/addons-586464/id_rsa Username:docker}
	I0904 19:26:05.218743   13059 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19575-5257/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0904 19:26:05.247412   13059 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19575-5257/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0904 19:26:05.269512   13059 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19575-5257/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0904 19:26:05.291492   13059 provision.go:87] duration metric: took 245.703989ms to configureAuth
	I0904 19:26:05.291520   13059 buildroot.go:189] setting minikube options for container-runtime
	I0904 19:26:05.291724   13059 config.go:182] Loaded profile config "addons-586464": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0904 19:26:05.291753   13059 main.go:141] libmachine: (addons-586464) Calling .DriverName
	I0904 19:26:05.292043   13059 main.go:141] libmachine: (addons-586464) Calling .GetSSHHostname
	I0904 19:26:05.294566   13059 main.go:141] libmachine: (addons-586464) DBG | domain addons-586464 has defined MAC address 52:54:00:bb:04:2c in network mk-addons-586464
	I0904 19:26:05.294913   13059 main.go:141] libmachine: (addons-586464) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bb:04:2c", ip: ""} in network mk-addons-586464: {Iface:virbr1 ExpiryTime:2024-09-04 20:25:54 +0000 UTC Type:0 Mac:52:54:00:bb:04:2c Iaid: IPaddr:192.168.39.55 Prefix:24 Hostname:addons-586464 Clientid:01:52:54:00:bb:04:2c}
	I0904 19:26:05.294941   13059 main.go:141] libmachine: (addons-586464) DBG | domain addons-586464 has defined IP address 192.168.39.55 and MAC address 52:54:00:bb:04:2c in network mk-addons-586464
	I0904 19:26:05.295050   13059 main.go:141] libmachine: (addons-586464) Calling .GetSSHPort
	I0904 19:26:05.295253   13059 main.go:141] libmachine: (addons-586464) Calling .GetSSHKeyPath
	I0904 19:26:05.295404   13059 main.go:141] libmachine: (addons-586464) Calling .GetSSHKeyPath
	I0904 19:26:05.295532   13059 main.go:141] libmachine: (addons-586464) Calling .GetSSHUsername
	I0904 19:26:05.295698   13059 main.go:141] libmachine: Using SSH client type: native
	I0904 19:26:05.295858   13059 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.55 22 <nil> <nil>}
	I0904 19:26:05.295869   13059 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0904 19:26:05.398466   13059 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0904 19:26:05.398487   13059 buildroot.go:70] root file system type: tmpfs
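The probe above (`df --output=fstype / | tail -n 1`) is how the provisioner classifies the guest's root filesystem before deciding how to install the docker unit. A standalone version, assuming GNU coreutils `df` (the `--output` flag is not POSIX), looks like:

```shell
# Probe the root filesystem type, as the provisioner does over SSH.
# On the buildroot guest this prints "tmpfs"; on the machine running this
# sketch it prints whatever the local root filesystem happens to be.
fstype="$(df --output=fstype / | tail -n 1)"
echo "root fstype: $fstype"
```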
	I0904 19:26:05.398581   13059 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0904 19:26:05.398601   13059 main.go:141] libmachine: (addons-586464) Calling .GetSSHHostname
	I0904 19:26:05.401465   13059 main.go:141] libmachine: (addons-586464) DBG | domain addons-586464 has defined MAC address 52:54:00:bb:04:2c in network mk-addons-586464
	I0904 19:26:05.401864   13059 main.go:141] libmachine: (addons-586464) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bb:04:2c", ip: ""} in network mk-addons-586464: {Iface:virbr1 ExpiryTime:2024-09-04 20:25:54 +0000 UTC Type:0 Mac:52:54:00:bb:04:2c Iaid: IPaddr:192.168.39.55 Prefix:24 Hostname:addons-586464 Clientid:01:52:54:00:bb:04:2c}
	I0904 19:26:05.401891   13059 main.go:141] libmachine: (addons-586464) DBG | domain addons-586464 has defined IP address 192.168.39.55 and MAC address 52:54:00:bb:04:2c in network mk-addons-586464
	I0904 19:26:05.402032   13059 main.go:141] libmachine: (addons-586464) Calling .GetSSHPort
	I0904 19:26:05.402206   13059 main.go:141] libmachine: (addons-586464) Calling .GetSSHKeyPath
	I0904 19:26:05.402372   13059 main.go:141] libmachine: (addons-586464) Calling .GetSSHKeyPath
	I0904 19:26:05.402524   13059 main.go:141] libmachine: (addons-586464) Calling .GetSSHUsername
	I0904 19:26:05.402697   13059 main.go:141] libmachine: Using SSH client type: native
	I0904 19:26:05.402843   13059 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.55 22 <nil> <nil>}
	I0904 19:26:05.402900   13059 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0904 19:26:05.518165   13059 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
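The unit file echoed above relies on the systemd idiom its own comments describe: an empty `ExecStart=` first clears any command inherited from a base unit, then the real command is set. A minimal sketch of generating such a unit, written to a temp directory rather than `/lib/systemd/system`, with the dockerd flags trimmed for illustration:

```shell
# Write a drop-in-style unit into a scratch directory.
unit="$(mktemp -d)/docker.service"
printf %s '[Service]
ExecStart=
ExecStart=/usr/bin/dockerd -H unix:///var/run/docker.sock
' > "$unit"
# The blank ExecStart= resets any inherited command; without it, systemd
# rejects a second ExecStart= for non-oneshot services.
grep -c '^ExecStart=' "$unit"   # -> 2 (one reset line, one real command)
```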
	
	I0904 19:26:05.518191   13059 main.go:141] libmachine: (addons-586464) Calling .GetSSHHostname
	I0904 19:26:05.520345   13059 main.go:141] libmachine: (addons-586464) DBG | domain addons-586464 has defined MAC address 52:54:00:bb:04:2c in network mk-addons-586464
	I0904 19:26:05.520696   13059 main.go:141] libmachine: (addons-586464) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bb:04:2c", ip: ""} in network mk-addons-586464: {Iface:virbr1 ExpiryTime:2024-09-04 20:25:54 +0000 UTC Type:0 Mac:52:54:00:bb:04:2c Iaid: IPaddr:192.168.39.55 Prefix:24 Hostname:addons-586464 Clientid:01:52:54:00:bb:04:2c}
	I0904 19:26:05.520724   13059 main.go:141] libmachine: (addons-586464) DBG | domain addons-586464 has defined IP address 192.168.39.55 and MAC address 52:54:00:bb:04:2c in network mk-addons-586464
	I0904 19:26:05.520872   13059 main.go:141] libmachine: (addons-586464) Calling .GetSSHPort
	I0904 19:26:05.521041   13059 main.go:141] libmachine: (addons-586464) Calling .GetSSHKeyPath
	I0904 19:26:05.521180   13059 main.go:141] libmachine: (addons-586464) Calling .GetSSHKeyPath
	I0904 19:26:05.521290   13059 main.go:141] libmachine: (addons-586464) Calling .GetSSHUsername
	I0904 19:26:05.521450   13059 main.go:141] libmachine: Using SSH client type: native
	I0904 19:26:05.521651   13059 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.55 22 <nil> <nil>}
	I0904 19:26:05.521679   13059 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0904 19:26:08.029142   13059 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
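The SSH command above uses a compact "install only if changed" pattern: `diff -u old new || { mv ...; systemctl ...; }`. Because no unit existed yet, `diff` failed with "can't stat", the `||` branch ran, and the unit was installed and enabled. A sketch of the same idiom on temp files, with no systemd involved:

```shell
# diff exits non-zero when the files differ (or one is missing), so the
# || branch performs the swap; identical files leave everything untouched.
cur="$(mktemp)"; new="$(mktemp)"
echo "old config" > "$cur"
echo "new config" > "$new"
diff -u "$cur" "$new" >/dev/null 2>&1 || mv "$new" "$cur"
cat "$cur"   # -> new config
```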
	
	I0904 19:26:08.029173   13059 main.go:141] libmachine: Checking connection to Docker...
	I0904 19:26:08.029182   13059 main.go:141] libmachine: (addons-586464) Calling .GetURL
	I0904 19:26:08.030509   13059 main.go:141] libmachine: (addons-586464) DBG | Using libvirt version 6000000
	I0904 19:26:08.032452   13059 main.go:141] libmachine: (addons-586464) DBG | domain addons-586464 has defined MAC address 52:54:00:bb:04:2c in network mk-addons-586464
	I0904 19:26:08.032745   13059 main.go:141] libmachine: (addons-586464) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bb:04:2c", ip: ""} in network mk-addons-586464: {Iface:virbr1 ExpiryTime:2024-09-04 20:25:54 +0000 UTC Type:0 Mac:52:54:00:bb:04:2c Iaid: IPaddr:192.168.39.55 Prefix:24 Hostname:addons-586464 Clientid:01:52:54:00:bb:04:2c}
	I0904 19:26:08.032766   13059 main.go:141] libmachine: (addons-586464) DBG | domain addons-586464 has defined IP address 192.168.39.55 and MAC address 52:54:00:bb:04:2c in network mk-addons-586464
	I0904 19:26:08.032965   13059 main.go:141] libmachine: Docker is up and running!
	I0904 19:26:08.032980   13059 main.go:141] libmachine: Reticulating splines...
	I0904 19:26:08.032988   13059 client.go:171] duration metric: took 28.138966255s to LocalClient.Create
	I0904 19:26:08.033028   13059 start.go:167] duration metric: took 28.139052169s to libmachine.API.Create "addons-586464"
	I0904 19:26:08.033041   13059 start.go:293] postStartSetup for "addons-586464" (driver="kvm2")
	I0904 19:26:08.033056   13059 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0904 19:26:08.033077   13059 main.go:141] libmachine: (addons-586464) Calling .DriverName
	I0904 19:26:08.033301   13059 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0904 19:26:08.033323   13059 main.go:141] libmachine: (addons-586464) Calling .GetSSHHostname
	I0904 19:26:08.035391   13059 main.go:141] libmachine: (addons-586464) DBG | domain addons-586464 has defined MAC address 52:54:00:bb:04:2c in network mk-addons-586464
	I0904 19:26:08.035722   13059 main.go:141] libmachine: (addons-586464) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bb:04:2c", ip: ""} in network mk-addons-586464: {Iface:virbr1 ExpiryTime:2024-09-04 20:25:54 +0000 UTC Type:0 Mac:52:54:00:bb:04:2c Iaid: IPaddr:192.168.39.55 Prefix:24 Hostname:addons-586464 Clientid:01:52:54:00:bb:04:2c}
	I0904 19:26:08.035779   13059 main.go:141] libmachine: (addons-586464) DBG | domain addons-586464 has defined IP address 192.168.39.55 and MAC address 52:54:00:bb:04:2c in network mk-addons-586464
	I0904 19:26:08.035902   13059 main.go:141] libmachine: (addons-586464) Calling .GetSSHPort
	I0904 19:26:08.036054   13059 main.go:141] libmachine: (addons-586464) Calling .GetSSHKeyPath
	I0904 19:26:08.036228   13059 main.go:141] libmachine: (addons-586464) Calling .GetSSHUsername
	I0904 19:26:08.036333   13059 sshutil.go:53] new ssh client: &{IP:192.168.39.55 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19575-5257/.minikube/machines/addons-586464/id_rsa Username:docker}
	I0904 19:26:08.115644   13059 ssh_runner.go:195] Run: cat /etc/os-release
	I0904 19:26:08.119582   13059 info.go:137] Remote host: Buildroot 2023.02.9
	I0904 19:26:08.119604   13059 filesync.go:126] Scanning /home/jenkins/minikube-integration/19575-5257/.minikube/addons for local assets ...
	I0904 19:26:08.119689   13059 filesync.go:126] Scanning /home/jenkins/minikube-integration/19575-5257/.minikube/files for local assets ...
	I0904 19:26:08.119718   13059 start.go:296] duration metric: took 86.668005ms for postStartSetup
	I0904 19:26:08.119755   13059 main.go:141] libmachine: (addons-586464) Calling .GetConfigRaw
	I0904 19:26:08.120286   13059 main.go:141] libmachine: (addons-586464) Calling .GetIP
	I0904 19:26:08.122660   13059 main.go:141] libmachine: (addons-586464) DBG | domain addons-586464 has defined MAC address 52:54:00:bb:04:2c in network mk-addons-586464
	I0904 19:26:08.122989   13059 main.go:141] libmachine: (addons-586464) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bb:04:2c", ip: ""} in network mk-addons-586464: {Iface:virbr1 ExpiryTime:2024-09-04 20:25:54 +0000 UTC Type:0 Mac:52:54:00:bb:04:2c Iaid: IPaddr:192.168.39.55 Prefix:24 Hostname:addons-586464 Clientid:01:52:54:00:bb:04:2c}
	I0904 19:26:08.123017   13059 main.go:141] libmachine: (addons-586464) DBG | domain addons-586464 has defined IP address 192.168.39.55 and MAC address 52:54:00:bb:04:2c in network mk-addons-586464
	I0904 19:26:08.123216   13059 profile.go:143] Saving config to /home/jenkins/minikube-integration/19575-5257/.minikube/profiles/addons-586464/config.json ...
	I0904 19:26:08.123442   13059 start.go:128] duration metric: took 28.247641992s to createHost
	I0904 19:26:08.123464   13059 main.go:141] libmachine: (addons-586464) Calling .GetSSHHostname
	I0904 19:26:08.125493   13059 main.go:141] libmachine: (addons-586464) DBG | domain addons-586464 has defined MAC address 52:54:00:bb:04:2c in network mk-addons-586464
	I0904 19:26:08.125773   13059 main.go:141] libmachine: (addons-586464) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bb:04:2c", ip: ""} in network mk-addons-586464: {Iface:virbr1 ExpiryTime:2024-09-04 20:25:54 +0000 UTC Type:0 Mac:52:54:00:bb:04:2c Iaid: IPaddr:192.168.39.55 Prefix:24 Hostname:addons-586464 Clientid:01:52:54:00:bb:04:2c}
	I0904 19:26:08.125800   13059 main.go:141] libmachine: (addons-586464) DBG | domain addons-586464 has defined IP address 192.168.39.55 and MAC address 52:54:00:bb:04:2c in network mk-addons-586464
	I0904 19:26:08.125903   13059 main.go:141] libmachine: (addons-586464) Calling .GetSSHPort
	I0904 19:26:08.126068   13059 main.go:141] libmachine: (addons-586464) Calling .GetSSHKeyPath
	I0904 19:26:08.126217   13059 main.go:141] libmachine: (addons-586464) Calling .GetSSHKeyPath
	I0904 19:26:08.126306   13059 main.go:141] libmachine: (addons-586464) Calling .GetSSHUsername
	I0904 19:26:08.126448   13059 main.go:141] libmachine: Using SSH client type: native
	I0904 19:26:08.126589   13059 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.55 22 <nil> <nil>}
	I0904 19:26:08.126599   13059 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0904 19:26:08.229587   13059 main.go:141] libmachine: SSH cmd err, output: <nil>: 1725477968.202006659
	
	I0904 19:26:08.229615   13059 fix.go:216] guest clock: 1725477968.202006659
	I0904 19:26:08.229623   13059 fix.go:229] Guest: 2024-09-04 19:26:08.202006659 +0000 UTC Remote: 2024-09-04 19:26:08.12345406 +0000 UTC m=+28.348751581 (delta=78.552599ms)
	I0904 19:26:08.229667   13059 fix.go:200] guest clock delta is within tolerance: 78.552599ms
	I0904 19:26:08.229674   13059 start.go:83] releasing machines lock for "addons-586464", held for 28.353947352s
	I0904 19:26:08.229698   13059 main.go:141] libmachine: (addons-586464) Calling .DriverName
	I0904 19:26:08.229969   13059 main.go:141] libmachine: (addons-586464) Calling .GetIP
	I0904 19:26:08.232420   13059 main.go:141] libmachine: (addons-586464) DBG | domain addons-586464 has defined MAC address 52:54:00:bb:04:2c in network mk-addons-586464
	I0904 19:26:08.232730   13059 main.go:141] libmachine: (addons-586464) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bb:04:2c", ip: ""} in network mk-addons-586464: {Iface:virbr1 ExpiryTime:2024-09-04 20:25:54 +0000 UTC Type:0 Mac:52:54:00:bb:04:2c Iaid: IPaddr:192.168.39.55 Prefix:24 Hostname:addons-586464 Clientid:01:52:54:00:bb:04:2c}
	I0904 19:26:08.232750   13059 main.go:141] libmachine: (addons-586464) DBG | domain addons-586464 has defined IP address 192.168.39.55 and MAC address 52:54:00:bb:04:2c in network mk-addons-586464
	I0904 19:26:08.232919   13059 main.go:141] libmachine: (addons-586464) Calling .DriverName
	I0904 19:26:08.233396   13059 main.go:141] libmachine: (addons-586464) Calling .DriverName
	I0904 19:26:08.233564   13059 main.go:141] libmachine: (addons-586464) Calling .DriverName
	I0904 19:26:08.233657   13059 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0904 19:26:08.233694   13059 main.go:141] libmachine: (addons-586464) Calling .GetSSHHostname
	I0904 19:26:08.233813   13059 ssh_runner.go:195] Run: cat /version.json
	I0904 19:26:08.233839   13059 main.go:141] libmachine: (addons-586464) Calling .GetSSHHostname
	I0904 19:26:08.236495   13059 main.go:141] libmachine: (addons-586464) DBG | domain addons-586464 has defined MAC address 52:54:00:bb:04:2c in network mk-addons-586464
	I0904 19:26:08.236769   13059 main.go:141] libmachine: (addons-586464) DBG | domain addons-586464 has defined MAC address 52:54:00:bb:04:2c in network mk-addons-586464
	I0904 19:26:08.236826   13059 main.go:141] libmachine: (addons-586464) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bb:04:2c", ip: ""} in network mk-addons-586464: {Iface:virbr1 ExpiryTime:2024-09-04 20:25:54 +0000 UTC Type:0 Mac:52:54:00:bb:04:2c Iaid: IPaddr:192.168.39.55 Prefix:24 Hostname:addons-586464 Clientid:01:52:54:00:bb:04:2c}
	I0904 19:26:08.236861   13059 main.go:141] libmachine: (addons-586464) DBG | domain addons-586464 has defined IP address 192.168.39.55 and MAC address 52:54:00:bb:04:2c in network mk-addons-586464
	I0904 19:26:08.237072   13059 main.go:141] libmachine: (addons-586464) Calling .GetSSHPort
	I0904 19:26:08.237242   13059 main.go:141] libmachine: (addons-586464) Calling .GetSSHKeyPath
	I0904 19:26:08.237258   13059 main.go:141] libmachine: (addons-586464) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bb:04:2c", ip: ""} in network mk-addons-586464: {Iface:virbr1 ExpiryTime:2024-09-04 20:25:54 +0000 UTC Type:0 Mac:52:54:00:bb:04:2c Iaid: IPaddr:192.168.39.55 Prefix:24 Hostname:addons-586464 Clientid:01:52:54:00:bb:04:2c}
	I0904 19:26:08.237280   13059 main.go:141] libmachine: (addons-586464) DBG | domain addons-586464 has defined IP address 192.168.39.55 and MAC address 52:54:00:bb:04:2c in network mk-addons-586464
	I0904 19:26:08.237417   13059 main.go:141] libmachine: (addons-586464) Calling .GetSSHUsername
	I0904 19:26:08.237477   13059 main.go:141] libmachine: (addons-586464) Calling .GetSSHPort
	I0904 19:26:08.237625   13059 main.go:141] libmachine: (addons-586464) Calling .GetSSHKeyPath
	I0904 19:26:08.237660   13059 sshutil.go:53] new ssh client: &{IP:192.168.39.55 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19575-5257/.minikube/machines/addons-586464/id_rsa Username:docker}
	I0904 19:26:08.237740   13059 main.go:141] libmachine: (addons-586464) Calling .GetSSHUsername
	I0904 19:26:08.237862   13059 sshutil.go:53] new ssh client: &{IP:192.168.39.55 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19575-5257/.minikube/machines/addons-586464/id_rsa Username:docker}
	I0904 19:26:08.315464   13059 ssh_runner.go:195] Run: systemctl --version
	I0904 19:26:08.341235   13059 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0904 19:26:08.346736   13059 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0904 19:26:08.346801   13059 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0904 19:26:08.363239   13059 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0904 19:26:08.363269   13059 start.go:495] detecting cgroup driver to use...
	I0904 19:26:08.363407   13059 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0904 19:26:08.380972   13059 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0904 19:26:08.391390   13059 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0904 19:26:08.401629   13059 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0904 19:26:08.401696   13059 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0904 19:26:08.411664   13059 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0904 19:26:08.421555   13059 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0904 19:26:08.433435   13059 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0904 19:26:08.443551   13059 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0904 19:26:08.454202   13059 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0904 19:26:08.463613   13059 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0904 19:26:08.473279   13059 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
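The run of `sed` commands above rewrites `/etc/containerd/config.toml` in place to select the cgroupfs driver. A reduced sketch of one of those edits, run against a scratch copy (the key and expression are taken from the log; the sample file content is illustrative):

```shell
cfg="$(mktemp)"
printf '%s\n' '[plugins."io.containerd.grpc.v1.cri"]' '  SystemdCgroup = true' > "$cfg"
# Flip SystemdCgroup while preserving the line's indentation, exactly as
# the log's sed expression does.
sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' "$cfg"
grep 'SystemdCgroup' "$cfg"   # ->   SystemdCgroup = false
```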
	I0904 19:26:08.482735   13059 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0904 19:26:08.491134   13059 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0904 19:26:08.499518   13059 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0904 19:26:08.607169   13059 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0904 19:26:08.630420   13059 start.go:495] detecting cgroup driver to use...
	I0904 19:26:08.630496   13059 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0904 19:26:08.653465   13059 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0904 19:26:08.666906   13059 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0904 19:26:08.683531   13059 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0904 19:26:08.696271   13059 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0904 19:26:08.708914   13059 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0904 19:26:08.738316   13059 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0904 19:26:08.751510   13059 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0904 19:26:08.768623   13059 ssh_runner.go:195] Run: which cri-dockerd
	I0904 19:26:08.772036   13059 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0904 19:26:08.781370   13059 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0904 19:26:08.796954   13059 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0904 19:26:08.905714   13059 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0904 19:26:09.030319   13059 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0904 19:26:09.030464   13059 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0904 19:26:09.046532   13059 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0904 19:26:09.158887   13059 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0904 19:26:11.486629   13059 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.327709274s)
	I0904 19:26:11.486684   13059 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0904 19:26:11.499147   13059 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0904 19:26:11.511085   13059 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0904 19:26:11.618029   13059 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0904 19:26:11.742462   13059 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0904 19:26:11.855851   13059 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0904 19:26:11.872056   13059 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0904 19:26:11.884390   13059 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0904 19:26:11.992743   13059 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0904 19:26:12.067179   13059 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0904 19:26:12.067284   13059 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0904 19:26:12.072531   13059 start.go:563] Will wait 60s for crictl version
	I0904 19:26:12.072601   13059 ssh_runner.go:195] Run: which crictl
	I0904 19:26:12.076269   13059 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0904 19:26:12.111088   13059 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.2.0
	RuntimeApiVersion:  v1
	I0904 19:26:12.111157   13059 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0904 19:26:12.136743   13059 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0904 19:26:12.159900   13059 out.go:235] * Preparing Kubernetes v1.31.0 on Docker 27.2.0 ...
	I0904 19:26:12.159941   13059 main.go:141] libmachine: (addons-586464) Calling .GetIP
	I0904 19:26:12.162491   13059 main.go:141] libmachine: (addons-586464) DBG | domain addons-586464 has defined MAC address 52:54:00:bb:04:2c in network mk-addons-586464
	I0904 19:26:12.162871   13059 main.go:141] libmachine: (addons-586464) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bb:04:2c", ip: ""} in network mk-addons-586464: {Iface:virbr1 ExpiryTime:2024-09-04 20:25:54 +0000 UTC Type:0 Mac:52:54:00:bb:04:2c Iaid: IPaddr:192.168.39.55 Prefix:24 Hostname:addons-586464 Clientid:01:52:54:00:bb:04:2c}
	I0904 19:26:12.162898   13059 main.go:141] libmachine: (addons-586464) DBG | domain addons-586464 has defined IP address 192.168.39.55 and MAC address 52:54:00:bb:04:2c in network mk-addons-586464
	I0904 19:26:12.163044   13059 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0904 19:26:12.166849   13059 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
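The `/etc/hosts` update above is a delete-then-append idiom: filter out any existing `host.minikube.internal` entry, append the fresh one, and copy the result back, so repeated provisioning runs never accumulate duplicates. A sketch against a temp file (the IP is the gateway address from this run; the sample localhost line is illustrative):

```shell
hosts="$(mktemp)"
printf '127.0.0.1\tlocalhost\n192.168.39.1\thost.minikube.internal\n' > "$hosts"
# Drop any stale entry, then append the current one.
{ grep -v 'host.minikube.internal' "$hosts"
  printf '192.168.39.1\thost.minikube.internal\n'; } > "$hosts.new"
mv "$hosts.new" "$hosts"
grep -c 'host.minikube.internal' "$hosts"   # -> 1
```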
	I0904 19:26:12.178479   13059 kubeadm.go:883] updating cluster {Name:addons-586464 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.34.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-586464 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.55 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0904 19:26:12.178582   13059 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0904 19:26:12.178638   13059 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0904 19:26:12.193059   13059 docker.go:685] Got preloaded images: 
	I0904 19:26:12.193078   13059 docker.go:691] registry.k8s.io/kube-apiserver:v1.31.0 wasn't preloaded
	I0904 19:26:12.193137   13059 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0904 19:26:12.202644   13059 ssh_runner.go:195] Run: which lz4
	I0904 19:26:12.206359   13059 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0904 19:26:12.210223   13059 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0904 19:26:12.210249   13059 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19575-5257/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (342554258 bytes)
	I0904 19:26:13.231843   13059 docker.go:649] duration metric: took 1.025503446s to copy over tarball
	I0904 19:26:13.231909   13059 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0904 19:26:15.077123   13059 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.845184961s)
	I0904 19:26:15.077158   13059 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0904 19:26:15.109826   13059 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0904 19:26:15.119081   13059 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2631 bytes)
	I0904 19:26:15.134771   13059 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0904 19:26:15.237490   13059 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0904 19:26:18.612403   13059 ssh_runner.go:235] Completed: sudo systemctl restart docker: (3.374878638s)
	I0904 19:26:18.612492   13059 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0904 19:26:18.630577   13059 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.31.0
	registry.k8s.io/kube-controller-manager:v1.31.0
	registry.k8s.io/kube-scheduler:v1.31.0
	registry.k8s.io/kube-proxy:v1.31.0
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	registry.k8s.io/coredns/coredns:v1.11.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0904 19:26:18.630596   13059 cache_images.go:84] Images are preloaded, skipping loading
	I0904 19:26:18.630616   13059 kubeadm.go:934] updating node { 192.168.39.55 8443 v1.31.0 docker true true} ...
	I0904 19:26:18.630720   13059 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-586464 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.55
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:addons-586464 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0904 19:26:18.630774   13059 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0904 19:26:18.686298   13059 cni.go:84] Creating CNI manager for ""
	I0904 19:26:18.686320   13059 cni.go:158] "kvm2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0904 19:26:18.686338   13059 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0904 19:26:18.686358   13059 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.55 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-586464 NodeName:addons-586464 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.55"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.55 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0904 19:26:18.686512   13059 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.55
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "addons-586464"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.55
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.55"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0904 19:26:18.686567   13059 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0904 19:26:18.696233   13059 binaries.go:44] Found k8s binaries, skipping transfer
	I0904 19:26:18.696290   13059 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0904 19:26:18.705100   13059 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0904 19:26:18.720971   13059 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0904 19:26:18.736661   13059 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2158 bytes)
	I0904 19:26:18.752436   13059 ssh_runner.go:195] Run: grep 192.168.39.55	control-plane.minikube.internal$ /etc/hosts
	I0904 19:26:18.756331   13059 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.55	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0904 19:26:18.767719   13059 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0904 19:26:18.874309   13059 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0904 19:26:18.893981   13059 certs.go:68] Setting up /home/jenkins/minikube-integration/19575-5257/.minikube/profiles/addons-586464 for IP: 192.168.39.55
	I0904 19:26:18.894011   13059 certs.go:194] generating shared ca certs ...
	I0904 19:26:18.894031   13059 certs.go:226] acquiring lock for ca certs: {Name:mkd41a59227a9a45f240f3897300ba030d4e74d8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0904 19:26:18.894209   13059 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19575-5257/.minikube/ca.key
	I0904 19:26:19.036685   13059 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19575-5257/.minikube/ca.crt ...
	I0904 19:26:19.036713   13059 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19575-5257/.minikube/ca.crt: {Name:mk0029a612cb82c8e5bcba54359a7e8009e39327 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0904 19:26:19.036868   13059 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19575-5257/.minikube/ca.key ...
	I0904 19:26:19.036878   13059 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19575-5257/.minikube/ca.key: {Name:mk95a80397b89b8809959641944c438725f15164 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0904 19:26:19.036948   13059 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19575-5257/.minikube/proxy-client-ca.key
	I0904 19:26:19.110454   13059 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19575-5257/.minikube/proxy-client-ca.crt ...
	I0904 19:26:19.110480   13059 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19575-5257/.minikube/proxy-client-ca.crt: {Name:mk766cfc2da481e28cbaa0043e494c35a0486753 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0904 19:26:19.110623   13059 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19575-5257/.minikube/proxy-client-ca.key ...
	I0904 19:26:19.110632   13059 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19575-5257/.minikube/proxy-client-ca.key: {Name:mkb3f951af0dc29f45f8210e2a166789c29fd2fa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0904 19:26:19.110708   13059 certs.go:256] generating profile certs ...
	I0904 19:26:19.110760   13059 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19575-5257/.minikube/profiles/addons-586464/client.key
	I0904 19:26:19.110774   13059 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19575-5257/.minikube/profiles/addons-586464/client.crt with IP's: []
	I0904 19:26:19.171475   13059 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19575-5257/.minikube/profiles/addons-586464/client.crt ...
	I0904 19:26:19.171502   13059 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19575-5257/.minikube/profiles/addons-586464/client.crt: {Name:mk3fbfe78649e00440dab542026cff90c29e4988 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0904 19:26:19.171655   13059 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19575-5257/.minikube/profiles/addons-586464/client.key ...
	I0904 19:26:19.171665   13059 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19575-5257/.minikube/profiles/addons-586464/client.key: {Name:mkcae5f51f5ee6aaec6a8fba78e4083bd179d207 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0904 19:26:19.171729   13059 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19575-5257/.minikube/profiles/addons-586464/apiserver.key.0f7b9439
	I0904 19:26:19.171747   13059 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19575-5257/.minikube/profiles/addons-586464/apiserver.crt.0f7b9439 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.55]
	I0904 19:26:19.258749   13059 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19575-5257/.minikube/profiles/addons-586464/apiserver.crt.0f7b9439 ...
	I0904 19:26:19.258783   13059 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19575-5257/.minikube/profiles/addons-586464/apiserver.crt.0f7b9439: {Name:mk1d2e8cd980a9536f51f3a333180b2782e10781 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0904 19:26:19.258940   13059 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19575-5257/.minikube/profiles/addons-586464/apiserver.key.0f7b9439 ...
	I0904 19:26:19.258952   13059 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19575-5257/.minikube/profiles/addons-586464/apiserver.key.0f7b9439: {Name:mkb99202677d27735af35e5ac23349d84b040fb2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0904 19:26:19.259030   13059 certs.go:381] copying /home/jenkins/minikube-integration/19575-5257/.minikube/profiles/addons-586464/apiserver.crt.0f7b9439 -> /home/jenkins/minikube-integration/19575-5257/.minikube/profiles/addons-586464/apiserver.crt
	I0904 19:26:19.259118   13059 certs.go:385] copying /home/jenkins/minikube-integration/19575-5257/.minikube/profiles/addons-586464/apiserver.key.0f7b9439 -> /home/jenkins/minikube-integration/19575-5257/.minikube/profiles/addons-586464/apiserver.key
	I0904 19:26:19.259168   13059 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19575-5257/.minikube/profiles/addons-586464/proxy-client.key
	I0904 19:26:19.259191   13059 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19575-5257/.minikube/profiles/addons-586464/proxy-client.crt with IP's: []
	I0904 19:26:19.487351   13059 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19575-5257/.minikube/profiles/addons-586464/proxy-client.crt ...
	I0904 19:26:19.487386   13059 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19575-5257/.minikube/profiles/addons-586464/proxy-client.crt: {Name:mk06e5cf79750ca955a961782370cb85eec9842e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0904 19:26:19.487575   13059 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19575-5257/.minikube/profiles/addons-586464/proxy-client.key ...
	I0904 19:26:19.487588   13059 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19575-5257/.minikube/profiles/addons-586464/proxy-client.key: {Name:mkaee58e17600eb695a66c2a80929e9ec65a8171 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0904 19:26:19.487777   13059 certs.go:484] found cert: /home/jenkins/minikube-integration/19575-5257/.minikube/certs/ca-key.pem (1679 bytes)
	I0904 19:26:19.487811   13059 certs.go:484] found cert: /home/jenkins/minikube-integration/19575-5257/.minikube/certs/ca.pem (1082 bytes)
	I0904 19:26:19.487835   13059 certs.go:484] found cert: /home/jenkins/minikube-integration/19575-5257/.minikube/certs/cert.pem (1123 bytes)
	I0904 19:26:19.487859   13059 certs.go:484] found cert: /home/jenkins/minikube-integration/19575-5257/.minikube/certs/key.pem (1679 bytes)
	I0904 19:26:19.488433   13059 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19575-5257/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0904 19:26:19.512719   13059 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19575-5257/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0904 19:26:19.536398   13059 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19575-5257/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0904 19:26:19.559376   13059 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19575-5257/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0904 19:26:19.592929   13059 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19575-5257/.minikube/profiles/addons-586464/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0904 19:26:19.621810   13059 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19575-5257/.minikube/profiles/addons-586464/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0904 19:26:19.647110   13059 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19575-5257/.minikube/profiles/addons-586464/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0904 19:26:19.669950   13059 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19575-5257/.minikube/profiles/addons-586464/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0904 19:26:19.692866   13059 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19575-5257/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0904 19:26:19.716291   13059 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0904 19:26:19.732203   13059 ssh_runner.go:195] Run: openssl version
	I0904 19:26:19.737766   13059 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0904 19:26:19.748080   13059 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0904 19:26:19.752328   13059 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep  4 19:26 /usr/share/ca-certificates/minikubeCA.pem
	I0904 19:26:19.752388   13059 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0904 19:26:19.757907   13059 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0904 19:26:19.768365   13059 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0904 19:26:19.772155   13059 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0904 19:26:19.772218   13059 kubeadm.go:392] StartCluster: {Name:addons-586464 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.34.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-586464 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.55 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0904 19:26:19.772398   13059 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0904 19:26:19.786857   13059 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0904 19:26:19.796465   13059 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0904 19:26:19.805571   13059 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0904 19:26:19.814943   13059 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0904 19:26:19.814959   13059 kubeadm.go:157] found existing configuration files:
	
	I0904 19:26:19.814994   13059 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0904 19:26:19.823786   13059 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0904 19:26:19.823842   13059 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0904 19:26:19.832802   13059 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0904 19:26:19.841443   13059 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0904 19:26:19.841493   13059 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0904 19:26:19.850653   13059 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0904 19:26:19.859549   13059 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0904 19:26:19.859604   13059 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0904 19:26:19.868744   13059 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0904 19:26:19.877311   13059 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0904 19:26:19.877371   13059 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0904 19:26:19.886487   13059 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0904 19:26:19.935642   13059 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0
	I0904 19:26:19.935710   13059 kubeadm.go:310] [preflight] Running pre-flight checks
	I0904 19:26:20.031432   13059 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0904 19:26:20.031576   13059 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0904 19:26:20.031706   13059 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0904 19:26:20.048746   13059 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0904 19:26:20.051948   13059 out.go:235]   - Generating certificates and keys ...
	I0904 19:26:20.052057   13059 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0904 19:26:20.052166   13059 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0904 19:26:20.355660   13059 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0904 19:26:20.442524   13059 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0904 19:26:20.640026   13059 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0904 19:26:20.845092   13059 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0904 19:26:20.968681   13059 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0904 19:26:20.968788   13059 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-586464 localhost] and IPs [192.168.39.55 127.0.0.1 ::1]
	I0904 19:26:21.194512   13059 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0904 19:26:21.194750   13059 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-586464 localhost] and IPs [192.168.39.55 127.0.0.1 ::1]
	I0904 19:26:21.347704   13059 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0904 19:26:21.522155   13059 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0904 19:26:21.785118   13059 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0904 19:26:21.785436   13059 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0904 19:26:22.001069   13059 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0904 19:26:22.044579   13059 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0904 19:26:22.262445   13059 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0904 19:26:22.311099   13059 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0904 19:26:22.452851   13059 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0904 19:26:22.453543   13059 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0904 19:26:22.458316   13059 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0904 19:26:22.460589   13059 out.go:235]   - Booting up control plane ...
	I0904 19:26:22.460720   13059 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0904 19:26:22.460828   13059 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0904 19:26:22.460892   13059 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0904 19:26:22.475436   13059 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0904 19:26:22.481830   13059 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0904 19:26:22.481890   13059 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0904 19:26:22.603174   13059 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0904 19:26:22.603299   13059 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0904 19:26:23.104964   13059 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.816323ms
	I0904 19:26:23.105081   13059 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0904 19:26:28.108476   13059 kubeadm.go:310] [api-check] The API server is healthy after 5.002084241s
	I0904 19:26:28.123043   13059 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0904 19:26:28.142981   13059 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0904 19:26:28.170058   13059 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0904 19:26:28.170590   13059 kubeadm.go:310] [mark-control-plane] Marking the node addons-586464 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0904 19:26:28.180980   13059 kubeadm.go:310] [bootstrap-token] Using token: sj0n78.g5d1yo4quc6u7vfn
	I0904 19:26:28.182493   13059 out.go:235]   - Configuring RBAC rules ...
	I0904 19:26:28.182636   13059 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0904 19:26:28.190365   13059 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0904 19:26:28.198497   13059 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0904 19:26:28.201474   13059 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0904 19:26:28.209141   13059 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0904 19:26:28.212339   13059 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0904 19:26:28.516275   13059 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0904 19:26:28.959207   13059 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0904 19:26:29.514888   13059 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0904 19:26:29.516467   13059 kubeadm.go:310] 
	I0904 19:26:29.516525   13059 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0904 19:26:29.516532   13059 kubeadm.go:310] 
	I0904 19:26:29.516603   13059 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0904 19:26:29.516629   13059 kubeadm.go:310] 
	I0904 19:26:29.516683   13059 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0904 19:26:29.516768   13059 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0904 19:26:29.516810   13059 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0904 19:26:29.516817   13059 kubeadm.go:310] 
	I0904 19:26:29.516857   13059 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0904 19:26:29.516863   13059 kubeadm.go:310] 
	I0904 19:26:29.516899   13059 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0904 19:26:29.516905   13059 kubeadm.go:310] 
	I0904 19:26:29.516948   13059 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0904 19:26:29.517070   13059 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0904 19:26:29.517170   13059 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0904 19:26:29.517201   13059 kubeadm.go:310] 
	I0904 19:26:29.517319   13059 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0904 19:26:29.517419   13059 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0904 19:26:29.517428   13059 kubeadm.go:310] 
	I0904 19:26:29.517494   13059 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token sj0n78.g5d1yo4quc6u7vfn \
	I0904 19:26:29.517622   13059 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:bdb4724ce96206daae28efa33387e54ad8c864446fc57c505803f36cb1351893 \
	I0904 19:26:29.517665   13059 kubeadm.go:310] 	--control-plane 
	I0904 19:26:29.517677   13059 kubeadm.go:310] 
	I0904 19:26:29.517808   13059 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0904 19:26:29.517821   13059 kubeadm.go:310] 
	I0904 19:26:29.517886   13059 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token sj0n78.g5d1yo4quc6u7vfn \
	I0904 19:26:29.517983   13059 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:bdb4724ce96206daae28efa33387e54ad8c864446fc57c505803f36cb1351893 
	I0904 19:26:29.518835   13059 kubeadm.go:310] W0904 19:26:19.905517    1502 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0904 19:26:29.519176   13059 kubeadm.go:310] W0904 19:26:19.906414    1502 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0904 19:26:29.519285   13059 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0904 19:26:29.519318   13059 cni.go:84] Creating CNI manager for ""
	I0904 19:26:29.519333   13059 cni.go:158] "kvm2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0904 19:26:29.522523   13059 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0904 19:26:29.523817   13059 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0904 19:26:29.534074   13059 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0904 19:26:29.553298   13059 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0904 19:26:29.553360   13059 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0904 19:26:29.553409   13059 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-586464 minikube.k8s.io/updated_at=2024_09_04T19_26_29_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=8bb47038f7304b869a8e06758662cf35b40689af minikube.k8s.io/name=addons-586464 minikube.k8s.io/primary=true
	I0904 19:26:29.666054   13059 ops.go:34] apiserver oom_adj: -16
	I0904 19:26:29.666221   13059 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0904 19:26:30.166491   13059 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0904 19:26:30.667119   13059 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0904 19:26:31.167033   13059 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0904 19:26:31.666371   13059 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0904 19:26:32.166580   13059 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0904 19:26:32.667052   13059 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0904 19:26:33.167265   13059 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0904 19:26:33.667226   13059 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0904 19:26:34.166382   13059 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0904 19:26:34.237969   13059 kubeadm.go:1113] duration metric: took 4.684675395s to wait for elevateKubeSystemPrivileges
	I0904 19:26:34.238006   13059 kubeadm.go:394] duration metric: took 14.465793211s to StartCluster
	I0904 19:26:34.238024   13059 settings.go:142] acquiring lock: {Name:mk39c6cbd569e6bea3e7a40d07e315c481f4439c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0904 19:26:34.238145   13059 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19575-5257/kubeconfig
	I0904 19:26:34.238464   13059 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19575-5257/kubeconfig: {Name:mkbf449ef0118660f4a4f7a2edc835096b91d366 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0904 19:26:34.238624   13059 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0904 19:26:34.238639   13059 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.55 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0904 19:26:34.238712   13059 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0904 19:26:34.238797   13059 addons.go:69] Setting yakd=true in profile "addons-586464"
	I0904 19:26:34.238816   13059 addons.go:69] Setting inspektor-gadget=true in profile "addons-586464"
	I0904 19:26:34.238832   13059 addons.go:234] Setting addon yakd=true in "addons-586464"
	I0904 19:26:34.238833   13059 addons.go:69] Setting storage-provisioner=true in profile "addons-586464"
	I0904 19:26:34.238854   13059 addons.go:234] Setting addon inspektor-gadget=true in "addons-586464"
	I0904 19:26:34.238863   13059 host.go:66] Checking if "addons-586464" exists ...
	I0904 19:26:34.238871   13059 addons.go:69] Setting metrics-server=true in profile "addons-586464"
	I0904 19:26:34.238887   13059 host.go:66] Checking if "addons-586464" exists ...
	I0904 19:26:34.238890   13059 addons.go:69] Setting cloud-spanner=true in profile "addons-586464"
	I0904 19:26:34.238894   13059 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-586464"
	I0904 19:26:34.238888   13059 addons.go:69] Setting gcp-auth=true in profile "addons-586464"
	I0904 19:26:34.238921   13059 addons.go:234] Setting addon metrics-server=true in "addons-586464"
	I0904 19:26:34.238927   13059 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-586464"
	I0904 19:26:34.238931   13059 addons.go:234] Setting addon cloud-spanner=true in "addons-586464"
	I0904 19:26:34.238933   13059 addons.go:69] Setting default-storageclass=true in profile "addons-586464"
	I0904 19:26:34.238954   13059 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-586464"
	I0904 19:26:34.238965   13059 host.go:66] Checking if "addons-586464" exists ...
	I0904 19:26:34.238966   13059 addons.go:69] Setting volumesnapshots=true in profile "addons-586464"
	I0904 19:26:34.238981   13059 addons.go:234] Setting addon volumesnapshots=true in "addons-586464"
	I0904 19:26:34.238996   13059 host.go:66] Checking if "addons-586464" exists ...
	I0904 19:26:34.238854   13059 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-586464"
	I0904 19:26:34.238873   13059 addons.go:234] Setting addon storage-provisioner=true in "addons-586464"
	I0904 19:26:34.239296   13059 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0904 19:26:34.239303   13059 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-586464"
	I0904 19:26:34.239310   13059 addons.go:69] Setting helm-tiller=true in profile "addons-586464"
	I0904 19:26:34.239317   13059 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0904 19:26:34.239319   13059 host.go:66] Checking if "addons-586464" exists ...
	I0904 19:26:34.239324   13059 host.go:66] Checking if "addons-586464" exists ...
	I0904 19:26:34.239335   13059 addons.go:234] Setting addon helm-tiller=true in "addons-586464"
	I0904 19:26:34.239337   13059 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0904 19:26:34.239356   13059 host.go:66] Checking if "addons-586464" exists ...
	I0904 19:26:34.239376   13059 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0904 19:26:34.238926   13059 mustload.go:65] Loading cluster: addons-586464
	I0904 19:26:34.239302   13059 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0904 19:26:34.239403   13059 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0904 19:26:34.238872   13059 config.go:182] Loaded profile config "addons-586464": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0904 19:26:34.238882   13059 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-586464"
	I0904 19:26:34.238888   13059 addons.go:69] Setting volcano=true in profile "addons-586464"
	I0904 19:26:34.239442   13059 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0904 19:26:34.239459   13059 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-586464"
	I0904 19:26:34.239460   13059 addons.go:234] Setting addon volcano=true in "addons-586464"
	I0904 19:26:34.239489   13059 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0904 19:26:34.239530   13059 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0904 19:26:34.239552   13059 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0904 19:26:34.238880   13059 addons.go:69] Setting registry=true in profile "addons-586464"
	I0904 19:26:34.239609   13059 addons.go:234] Setting addon registry=true in "addons-586464"
	I0904 19:26:34.238959   13059 host.go:66] Checking if "addons-586464" exists ...
	I0904 19:26:34.239321   13059 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0904 19:26:34.239629   13059 addons.go:69] Setting ingress-dns=true in profile "addons-586464"
	I0904 19:26:34.239639   13059 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0904 19:26:34.239649   13059 addons.go:234] Setting addon ingress-dns=true in "addons-586464"
	I0904 19:26:34.239625   13059 addons.go:69] Setting ingress=true in profile "addons-586464"
	I0904 19:26:34.239652   13059 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0904 19:26:34.239662   13059 config.go:182] Loaded profile config "addons-586464": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0904 19:26:34.239668   13059 addons.go:234] Setting addon ingress=true in "addons-586464"
	I0904 19:26:34.239666   13059 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0904 19:26:34.239731   13059 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0904 19:26:34.239748   13059 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0904 19:26:34.239804   13059 host.go:66] Checking if "addons-586464" exists ...
	I0904 19:26:34.239948   13059 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0904 19:26:34.239980   13059 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0904 19:26:34.239989   13059 host.go:66] Checking if "addons-586464" exists ...
	I0904 19:26:34.239995   13059 host.go:66] Checking if "addons-586464" exists ...
	I0904 19:26:34.240023   13059 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0904 19:26:34.240046   13059 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0904 19:26:34.240102   13059 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0904 19:26:34.240115   13059 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0904 19:26:34.240121   13059 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0904 19:26:34.240133   13059 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0904 19:26:34.240264   13059 host.go:66] Checking if "addons-586464" exists ...
	I0904 19:26:34.240677   13059 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0904 19:26:34.240831   13059 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0904 19:26:34.240295   13059 host.go:66] Checking if "addons-586464" exists ...
	I0904 19:26:34.241749   13059 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0904 19:26:34.241773   13059 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0904 19:26:34.245024   13059 out.go:177] * Verifying Kubernetes components...
	I0904 19:26:34.247098   13059 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0904 19:26:34.260239   13059 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32773
	I0904 19:26:34.260936   13059 main.go:141] libmachine: () Calling .GetVersion
	I0904 19:26:34.261603   13059 main.go:141] libmachine: Using API Version  1
	I0904 19:26:34.261630   13059 main.go:141] libmachine: () Calling .SetConfigRaw
	I0904 19:26:34.261971   13059 main.go:141] libmachine: () Calling .GetMachineName
	I0904 19:26:34.262276   13059 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40397
	I0904 19:26:34.262783   13059 main.go:141] libmachine: () Calling .GetVersion
	I0904 19:26:34.263285   13059 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33191
	I0904 19:26:34.263393   13059 main.go:141] libmachine: Using API Version  1
	I0904 19:26:34.263417   13059 main.go:141] libmachine: () Calling .SetConfigRaw
	I0904 19:26:34.263631   13059 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0904 19:26:34.263663   13059 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0904 19:26:34.263707   13059 main.go:141] libmachine: () Calling .GetVersion
	I0904 19:26:34.263852   13059 main.go:141] libmachine: () Calling .GetMachineName
	I0904 19:26:34.264209   13059 main.go:141] libmachine: Using API Version  1
	I0904 19:26:34.264236   13059 main.go:141] libmachine: () Calling .SetConfigRaw
	I0904 19:26:34.264365   13059 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0904 19:26:34.264417   13059 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0904 19:26:34.265259   13059 main.go:141] libmachine: () Calling .GetMachineName
	I0904 19:26:34.265444   13059 main.go:141] libmachine: (addons-586464) Calling .GetState
	I0904 19:26:34.265736   13059 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0904 19:26:34.265765   13059 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0904 19:26:34.265840   13059 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0904 19:26:34.265868   13059 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0904 19:26:34.268370   13059 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38813
	I0904 19:26:34.268720   13059 main.go:141] libmachine: () Calling .GetVersion
	I0904 19:26:34.269237   13059 main.go:141] libmachine: Using API Version  1
	I0904 19:26:34.269257   13059 main.go:141] libmachine: () Calling .SetConfigRaw
	I0904 19:26:34.269584   13059 main.go:141] libmachine: () Calling .GetMachineName
	I0904 19:26:34.270111   13059 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0904 19:26:34.270144   13059 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0904 19:26:34.271035   13059 addons.go:234] Setting addon default-storageclass=true in "addons-586464"
	I0904 19:26:34.271068   13059 host.go:66] Checking if "addons-586464" exists ...
	I0904 19:26:34.271435   13059 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0904 19:26:34.271453   13059 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0904 19:26:34.271696   13059 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45969
	I0904 19:26:34.277970   13059 main.go:141] libmachine: () Calling .GetVersion
	I0904 19:26:34.278529   13059 main.go:141] libmachine: Using API Version  1
	I0904 19:26:34.278559   13059 main.go:141] libmachine: () Calling .SetConfigRaw
	I0904 19:26:34.285984   13059 main.go:141] libmachine: () Calling .GetMachineName
	I0904 19:26:34.286697   13059 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0904 19:26:34.286742   13059 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0904 19:26:34.289138   13059 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36697
	I0904 19:26:34.290058   13059 main.go:141] libmachine: () Calling .GetVersion
	I0904 19:26:34.290379   13059 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44053
	I0904 19:26:34.290667   13059 main.go:141] libmachine: Using API Version  1
	I0904 19:26:34.290679   13059 main.go:141] libmachine: () Calling .SetConfigRaw
	I0904 19:26:34.290736   13059 main.go:141] libmachine: () Calling .GetVersion
	I0904 19:26:34.291007   13059 main.go:141] libmachine: () Calling .GetMachineName
	I0904 19:26:34.291148   13059 main.go:141] libmachine: Using API Version  1
	I0904 19:26:34.291159   13059 main.go:141] libmachine: () Calling .SetConfigRaw
	I0904 19:26:34.291575   13059 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0904 19:26:34.291605   13059 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0904 19:26:34.293451   13059 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34471
	I0904 19:26:34.293872   13059 main.go:141] libmachine: () Calling .GetMachineName
	I0904 19:26:34.294041   13059 main.go:141] libmachine: () Calling .GetVersion
	I0904 19:26:34.294463   13059 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0904 19:26:34.294490   13059 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0904 19:26:34.295135   13059 main.go:141] libmachine: Using API Version  1
	I0904 19:26:34.295157   13059 main.go:141] libmachine: () Calling .SetConfigRaw
	I0904 19:26:34.295562   13059 main.go:141] libmachine: () Calling .GetMachineName
	I0904 19:26:34.296099   13059 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0904 19:26:34.296128   13059 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0904 19:26:34.296509   13059 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42823
	I0904 19:26:34.296905   13059 main.go:141] libmachine: () Calling .GetVersion
	I0904 19:26:34.297367   13059 main.go:141] libmachine: Using API Version  1
	I0904 19:26:34.297387   13059 main.go:141] libmachine: () Calling .SetConfigRaw
	I0904 19:26:34.297746   13059 main.go:141] libmachine: () Calling .GetMachineName
	I0904 19:26:34.297934   13059 main.go:141] libmachine: (addons-586464) Calling .GetState
	I0904 19:26:34.299636   13059 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33593
	I0904 19:26:34.301320   13059 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39541
	I0904 19:26:34.301817   13059 main.go:141] libmachine: () Calling .GetVersion
	I0904 19:26:34.302356   13059 main.go:141] libmachine: Using API Version  1
	I0904 19:26:34.302377   13059 main.go:141] libmachine: () Calling .SetConfigRaw
	I0904 19:26:34.303317   13059 main.go:141] libmachine: () Calling .GetMachineName
	I0904 19:26:34.303881   13059 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0904 19:26:34.303922   13059 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0904 19:26:34.306800   13059 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34603
	I0904 19:26:34.307324   13059 main.go:141] libmachine: () Calling .GetVersion
	I0904 19:26:34.307959   13059 main.go:141] libmachine: Using API Version  1
	I0904 19:26:34.307976   13059 main.go:141] libmachine: () Calling .SetConfigRaw
	I0904 19:26:34.308547   13059 main.go:141] libmachine: () Calling .GetMachineName
	I0904 19:26:34.309107   13059 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0904 19:26:34.309142   13059 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0904 19:26:34.310280   13059 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44003
	I0904 19:26:34.311240   13059 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46405
	I0904 19:26:34.311279   13059 main.go:141] libmachine: () Calling .GetVersion
	I0904 19:26:34.311731   13059 main.go:141] libmachine: () Calling .GetVersion
	I0904 19:26:34.311807   13059 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44447
	I0904 19:26:34.312079   13059 main.go:141] libmachine: () Calling .GetVersion
	I0904 19:26:34.312333   13059 main.go:141] libmachine: Using API Version  1
	I0904 19:26:34.312350   13059 main.go:141] libmachine: () Calling .SetConfigRaw
	I0904 19:26:34.312564   13059 main.go:141] libmachine: Using API Version  1
	I0904 19:26:34.312590   13059 main.go:141] libmachine: () Calling .SetConfigRaw
	I0904 19:26:34.313086   13059 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37457
	I0904 19:26:34.313090   13059 main.go:141] libmachine: () Calling .GetMachineName
	I0904 19:26:34.313117   13059 main.go:141] libmachine: () Calling .GetMachineName
	I0904 19:26:34.313253   13059 main.go:141] libmachine: Using API Version  1
	I0904 19:26:34.313268   13059 main.go:141] libmachine: () Calling .SetConfigRaw
	I0904 19:26:34.313334   13059 main.go:141] libmachine: (addons-586464) Calling .GetState
	I0904 19:26:34.313677   13059 main.go:141] libmachine: () Calling .GetVersion
	I0904 19:26:34.313676   13059 main.go:141] libmachine: () Calling .GetMachineName
	I0904 19:26:34.313903   13059 main.go:141] libmachine: (addons-586464) Calling .GetState
	I0904 19:26:34.314072   13059 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0904 19:26:34.314123   13059 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0904 19:26:34.315035   13059 main.go:141] libmachine: Using API Version  1
	I0904 19:26:34.315059   13059 main.go:141] libmachine: () Calling .SetConfigRaw
	I0904 19:26:34.315574   13059 main.go:141] libmachine: () Calling .GetMachineName
	I0904 19:26:34.315903   13059 main.go:141] libmachine: () Calling .GetVersion
	I0904 19:26:34.316297   13059 main.go:141] libmachine: (addons-586464) Calling .DriverName
	I0904 19:26:34.316830   13059 host.go:66] Checking if "addons-586464" exists ...
	I0904 19:26:34.317206   13059 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0904 19:26:34.317242   13059 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0904 19:26:34.317274   13059 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0904 19:26:34.317309   13059 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0904 19:26:34.317455   13059 main.go:141] libmachine: (addons-586464) Calling .DriverName
	I0904 19:26:34.317852   13059 main.go:141] libmachine: Using API Version  1
	I0904 19:26:34.318110   13059 main.go:141] libmachine: () Calling .SetConfigRaw
	I0904 19:26:34.318534   13059 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0
	I0904 19:26:34.319446   13059 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0904 19:26:34.320304   13059 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0904 19:26:34.320321   13059 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0904 19:26:34.320341   13059 main.go:141] libmachine: (addons-586464) Calling .GetSSHHostname
	I0904 19:26:34.321066   13059 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0904 19:26:34.321084   13059 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0904 19:26:34.321105   13059 main.go:141] libmachine: (addons-586464) Calling .GetSSHHostname
	I0904 19:26:34.321762   13059 main.go:141] libmachine: () Calling .GetMachineName
	I0904 19:26:34.324826   13059 main.go:141] libmachine: (addons-586464) DBG | domain addons-586464 has defined MAC address 52:54:00:bb:04:2c in network mk-addons-586464
	I0904 19:26:34.324858   13059 main.go:141] libmachine: (addons-586464) DBG | domain addons-586464 has defined MAC address 52:54:00:bb:04:2c in network mk-addons-586464
	I0904 19:26:34.325338   13059 main.go:141] libmachine: (addons-586464) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bb:04:2c", ip: ""} in network mk-addons-586464: {Iface:virbr1 ExpiryTime:2024-09-04 20:25:54 +0000 UTC Type:0 Mac:52:54:00:bb:04:2c Iaid: IPaddr:192.168.39.55 Prefix:24 Hostname:addons-586464 Clientid:01:52:54:00:bb:04:2c}
	I0904 19:26:34.325366   13059 main.go:141] libmachine: (addons-586464) DBG | domain addons-586464 has defined IP address 192.168.39.55 and MAC address 52:54:00:bb:04:2c in network mk-addons-586464
	I0904 19:26:34.325935   13059 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0904 19:26:34.325985   13059 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0904 19:26:34.326200   13059 main.go:141] libmachine: (addons-586464) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bb:04:2c", ip: ""} in network mk-addons-586464: {Iface:virbr1 ExpiryTime:2024-09-04 20:25:54 +0000 UTC Type:0 Mac:52:54:00:bb:04:2c Iaid: IPaddr:192.168.39.55 Prefix:24 Hostname:addons-586464 Clientid:01:52:54:00:bb:04:2c}
	I0904 19:26:34.326216   13059 main.go:141] libmachine: (addons-586464) DBG | domain addons-586464 has defined IP address 192.168.39.55 and MAC address 52:54:00:bb:04:2c in network mk-addons-586464
	I0904 19:26:34.326245   13059 main.go:141] libmachine: (addons-586464) Calling .GetSSHPort
	I0904 19:26:34.326295   13059 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46579
	I0904 19:26:34.326425   13059 main.go:141] libmachine: (addons-586464) Calling .GetSSHPort
	I0904 19:26:34.326517   13059 main.go:141] libmachine: (addons-586464) Calling .GetSSHKeyPath
	I0904 19:26:34.326710   13059 main.go:141] libmachine: () Calling .GetVersion
	I0904 19:26:34.326804   13059 main.go:141] libmachine: (addons-586464) Calling .GetSSHUsername
	I0904 19:26:34.326844   13059 main.go:141] libmachine: (addons-586464) Calling .GetSSHKeyPath
	I0904 19:26:34.327020   13059 main.go:141] libmachine: (addons-586464) Calling .GetSSHUsername
	I0904 19:26:34.327067   13059 sshutil.go:53] new ssh client: &{IP:192.168.39.55 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19575-5257/.minikube/machines/addons-586464/id_rsa Username:docker}
	I0904 19:26:34.327450   13059 sshutil.go:53] new ssh client: &{IP:192.168.39.55 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19575-5257/.minikube/machines/addons-586464/id_rsa Username:docker}
	I0904 19:26:34.328189   13059 main.go:141] libmachine: Using API Version  1
	I0904 19:26:34.328211   13059 main.go:141] libmachine: () Calling .SetConfigRaw
	I0904 19:26:34.328598   13059 main.go:141] libmachine: () Calling .GetMachineName
	I0904 19:26:34.328943   13059 main.go:141] libmachine: (addons-586464) Calling .GetState
	I0904 19:26:34.330503   13059 main.go:141] libmachine: (addons-586464) Calling .DriverName
	I0904 19:26:34.331229   13059 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44689
	I0904 19:26:34.331677   13059 main.go:141] libmachine: () Calling .GetVersion
	I0904 19:26:34.332158   13059 main.go:141] libmachine: Using API Version  1
	I0904 19:26:34.332182   13059 main.go:141] libmachine: () Calling .SetConfigRaw
	I0904 19:26:34.332222   13059 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I0904 19:26:34.332550   13059 main.go:141] libmachine: () Calling .GetMachineName
	I0904 19:26:34.332697   13059 main.go:141] libmachine: (addons-586464) Calling .GetState
	I0904 19:26:34.333690   13059 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0904 19:26:34.333711   13059 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I0904 19:26:34.333727   13059 main.go:141] libmachine: (addons-586464) Calling .GetSSHHostname
	I0904 19:26:34.335950   13059 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-586464"
	I0904 19:26:34.335994   13059 host.go:66] Checking if "addons-586464" exists ...
	I0904 19:26:34.336369   13059 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0904 19:26:34.336401   13059 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0904 19:26:34.336956   13059 main.go:141] libmachine: (addons-586464) DBG | domain addons-586464 has defined MAC address 52:54:00:bb:04:2c in network mk-addons-586464
	I0904 19:26:34.337646   13059 main.go:141] libmachine: (addons-586464) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bb:04:2c", ip: ""} in network mk-addons-586464: {Iface:virbr1 ExpiryTime:2024-09-04 20:25:54 +0000 UTC Type:0 Mac:52:54:00:bb:04:2c Iaid: IPaddr:192.168.39.55 Prefix:24 Hostname:addons-586464 Clientid:01:52:54:00:bb:04:2c}
	I0904 19:26:34.337668   13059 main.go:141] libmachine: (addons-586464) DBG | domain addons-586464 has defined IP address 192.168.39.55 and MAC address 52:54:00:bb:04:2c in network mk-addons-586464
	I0904 19:26:34.337860   13059 main.go:141] libmachine: (addons-586464) Calling .GetSSHPort
	I0904 19:26:34.338012   13059 main.go:141] libmachine: (addons-586464) Calling .GetSSHKeyPath
	I0904 19:26:34.338162   13059 main.go:141] libmachine: (addons-586464) Calling .GetSSHUsername
	I0904 19:26:34.338295   13059 sshutil.go:53] new ssh client: &{IP:192.168.39.55 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19575-5257/.minikube/machines/addons-586464/id_rsa Username:docker}
	I0904 19:26:34.342761   13059 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33159
	I0904 19:26:34.343237   13059 main.go:141] libmachine: () Calling .GetVersion
	I0904 19:26:34.343751   13059 main.go:141] libmachine: Using API Version  1
	I0904 19:26:34.343770   13059 main.go:141] libmachine: () Calling .SetConfigRaw
	I0904 19:26:34.344125   13059 main.go:141] libmachine: () Calling .GetMachineName
	I0904 19:26:34.344639   13059 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0904 19:26:34.344667   13059 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0904 19:26:34.345990   13059 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41633
	I0904 19:26:34.346445   13059 main.go:141] libmachine: () Calling .GetVersion
	I0904 19:26:34.346943   13059 main.go:141] libmachine: Using API Version  1
	I0904 19:26:34.346960   13059 main.go:141] libmachine: () Calling .SetConfigRaw
	I0904 19:26:34.347291   13059 main.go:141] libmachine: () Calling .GetMachineName
	I0904 19:26:34.347489   13059 main.go:141] libmachine: (addons-586464) Calling .GetState
	I0904 19:26:34.350040   13059 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38235
	I0904 19:26:34.350164   13059 main.go:141] libmachine: (addons-586464) Calling .DriverName
	I0904 19:26:34.350985   13059 main.go:141] libmachine: () Calling .GetVersion
	I0904 19:26:34.351875   13059 main.go:141] libmachine: Using API Version  1
	I0904 19:26:34.351893   13059 main.go:141] libmachine: () Calling .SetConfigRaw
	I0904 19:26:34.352247   13059 main.go:141] libmachine: () Calling .GetMachineName
	I0904 19:26:34.352797   13059 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0904 19:26:34.352833   13059 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0904 19:26:34.353010   13059 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I0904 19:26:34.354492   13059 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0904 19:26:34.354512   13059 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0904 19:26:34.354531   13059 main.go:141] libmachine: (addons-586464) Calling .GetSSHHostname
	I0904 19:26:34.357864   13059 main.go:141] libmachine: (addons-586464) DBG | domain addons-586464 has defined MAC address 52:54:00:bb:04:2c in network mk-addons-586464
	I0904 19:26:34.357903   13059 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46141
	I0904 19:26:34.358023   13059 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40677
	I0904 19:26:34.358112   13059 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39801
	I0904 19:26:34.358317   13059 main.go:141] libmachine: (addons-586464) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bb:04:2c", ip: ""} in network mk-addons-586464: {Iface:virbr1 ExpiryTime:2024-09-04 20:25:54 +0000 UTC Type:0 Mac:52:54:00:bb:04:2c Iaid: IPaddr:192.168.39.55 Prefix:24 Hostname:addons-586464 Clientid:01:52:54:00:bb:04:2c}
	I0904 19:26:34.358342   13059 main.go:141] libmachine: (addons-586464) DBG | domain addons-586464 has defined IP address 192.168.39.55 and MAC address 52:54:00:bb:04:2c in network mk-addons-586464
	I0904 19:26:34.358678   13059 main.go:141] libmachine: () Calling .GetVersion
	I0904 19:26:34.358769   13059 main.go:141] libmachine: (addons-586464) Calling .GetSSHPort
	I0904 19:26:34.359319   13059 main.go:141] libmachine: Using API Version  1
	I0904 19:26:34.359345   13059 main.go:141] libmachine: () Calling .SetConfigRaw
	I0904 19:26:34.359406   13059 main.go:141] libmachine: (addons-586464) Calling .GetSSHKeyPath
	I0904 19:26:34.359725   13059 main.go:141] libmachine: () Calling .GetMachineName
	I0904 19:26:34.359772   13059 main.go:141] libmachine: (addons-586464) Calling .GetSSHUsername
	I0904 19:26:34.360154   13059 main.go:141] libmachine: () Calling .GetVersion
	I0904 19:26:34.360236   13059 sshutil.go:53] new ssh client: &{IP:192.168.39.55 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19575-5257/.minikube/machines/addons-586464/id_rsa Username:docker}
	I0904 19:26:34.360590   13059 main.go:141] libmachine: (addons-586464) Calling .GetState
	I0904 19:26:34.361485   13059 main.go:141] libmachine: Using API Version  1
	I0904 19:26:34.361506   13059 main.go:141] libmachine: () Calling .SetConfigRaw
	I0904 19:26:34.361569   13059 main.go:141] libmachine: () Calling .GetVersion
	I0904 19:26:34.361888   13059 main.go:141] libmachine: () Calling .GetMachineName
	I0904 19:26:34.362025   13059 main.go:141] libmachine: (addons-586464) Calling .GetState
	I0904 19:26:34.362597   13059 main.go:141] libmachine: (addons-586464) Calling .DriverName
	I0904 19:26:34.363305   13059 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38573
	I0904 19:26:34.363818   13059 main.go:141] libmachine: () Calling .GetVersion
	I0904 19:26:34.363901   13059 main.go:141] libmachine: (addons-586464) Calling .DriverName
	I0904 19:26:34.364176   13059 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0904 19:26:34.364191   13059 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0904 19:26:34.364208   13059 main.go:141] libmachine: (addons-586464) Calling .GetSSHHostname
	I0904 19:26:34.364350   13059 main.go:141] libmachine: Using API Version  1
	I0904 19:26:34.364361   13059 main.go:141] libmachine: () Calling .SetConfigRaw
	I0904 19:26:34.364487   13059 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0904 19:26:34.364752   13059 main.go:141] libmachine: () Calling .GetMachineName
	I0904 19:26:34.365022   13059 main.go:141] libmachine: (addons-586464) Calling .GetState
	I0904 19:26:34.365928   13059 main.go:141] libmachine: Using API Version  1
	I0904 19:26:34.365944   13059 main.go:141] libmachine: () Calling .SetConfigRaw
	I0904 19:26:34.366486   13059 main.go:141] libmachine: () Calling .GetMachineName
	I0904 19:26:34.366658   13059 main.go:141] libmachine: (addons-586464) Calling .DriverName
	I0904 19:26:34.367405   13059 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0904 19:26:34.367427   13059 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0904 19:26:34.367444   13059 main.go:141] libmachine: (addons-586464) Calling .GetSSHHostname
	I0904 19:26:34.367852   13059 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39071
	I0904 19:26:34.367869   13059 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44073
	I0904 19:26:34.368756   13059 main.go:141] libmachine: (addons-586464) DBG | domain addons-586464 has defined MAC address 52:54:00:bb:04:2c in network mk-addons-586464
	I0904 19:26:34.368802   13059 main.go:141] libmachine: () Calling .GetVersion
	I0904 19:26:34.368867   13059 main.go:141] libmachine: () Calling .GetVersion
	I0904 19:26:34.369335   13059 main.go:141] libmachine: Using API Version  1
	I0904 19:26:34.369390   13059 main.go:141] libmachine: () Calling .SetConfigRaw
	I0904 19:26:34.369453   13059 main.go:141] libmachine: (addons-586464) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bb:04:2c", ip: ""} in network mk-addons-586464: {Iface:virbr1 ExpiryTime:2024-09-04 20:25:54 +0000 UTC Type:0 Mac:52:54:00:bb:04:2c Iaid: IPaddr:192.168.39.55 Prefix:24 Hostname:addons-586464 Clientid:01:52:54:00:bb:04:2c}
	I0904 19:26:34.369469   13059 main.go:141] libmachine: (addons-586464) DBG | domain addons-586464 has defined IP address 192.168.39.55 and MAC address 52:54:00:bb:04:2c in network mk-addons-586464
	I0904 19:26:34.369566   13059 main.go:141] libmachine: Using API Version  1
	I0904 19:26:34.369576   13059 main.go:141] libmachine: () Calling .SetConfigRaw
	I0904 19:26:34.369868   13059 main.go:141] libmachine: (addons-586464) Calling .DriverName
	I0904 19:26:34.370261   13059 main.go:141] libmachine: (addons-586464) Calling .GetSSHPort
	I0904 19:26:34.370274   13059 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33611
	I0904 19:26:34.370323   13059 main.go:141] libmachine: () Calling .GetMachineName
	I0904 19:26:34.370482   13059 main.go:141] libmachine: (addons-586464) Calling .GetSSHKeyPath
	I0904 19:26:34.370634   13059 main.go:141] libmachine: (addons-586464) Calling .GetSSHUsername
	I0904 19:26:34.370654   13059 main.go:141] libmachine: () Calling .GetMachineName
	I0904 19:26:34.370704   13059 main.go:141] libmachine: () Calling .GetVersion
	I0904 19:26:34.370782   13059 main.go:141] libmachine: (addons-586464) Calling .GetState
	I0904 19:26:34.370889   13059 sshutil.go:53] new ssh client: &{IP:192.168.39.55 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19575-5257/.minikube/machines/addons-586464/id_rsa Username:docker}
	I0904 19:26:34.371390   13059 main.go:141] libmachine: (addons-586464) Calling .GetState
	I0904 19:26:34.372053   13059 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0904 19:26:34.372320   13059 main.go:141] libmachine: (addons-586464) Calling .DriverName
	I0904 19:26:34.372459   13059 main.go:141] libmachine: Using API Version  1
	I0904 19:26:34.372477   13059 main.go:141] libmachine: () Calling .SetConfigRaw
	I0904 19:26:34.373148   13059 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0904 19:26:34.373164   13059 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0904 19:26:34.373181   13059 main.go:141] libmachine: (addons-586464) Calling .GetSSHHostname
	I0904 19:26:34.373778   13059 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0904 19:26:34.374596   13059 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35143
	I0904 19:26:34.374622   13059 main.go:141] libmachine: (addons-586464) DBG | domain addons-586464 has defined MAC address 52:54:00:bb:04:2c in network mk-addons-586464
	I0904 19:26:34.374645   13059 main.go:141] libmachine: (addons-586464) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bb:04:2c", ip: ""} in network mk-addons-586464: {Iface:virbr1 ExpiryTime:2024-09-04 20:25:54 +0000 UTC Type:0 Mac:52:54:00:bb:04:2c Iaid: IPaddr:192.168.39.55 Prefix:24 Hostname:addons-586464 Clientid:01:52:54:00:bb:04:2c}
	I0904 19:26:34.374665   13059 main.go:141] libmachine: (addons-586464) DBG | domain addons-586464 has defined IP address 192.168.39.55 and MAC address 52:54:00:bb:04:2c in network mk-addons-586464
	I0904 19:26:34.374598   13059 main.go:141] libmachine: (addons-586464) Calling .GetSSHPort
	I0904 19:26:34.374712   13059 main.go:141] libmachine: () Calling .GetMachineName
	I0904 19:26:34.374691   13059 main.go:141] libmachine: (addons-586464) Calling .DriverName
	I0904 19:26:34.375183   13059 main.go:141] libmachine: () Calling .GetVersion
	I0904 19:26:34.375276   13059 main.go:141] libmachine: (addons-586464) Calling .GetSSHKeyPath
	I0904 19:26:34.376183   13059 main.go:141] libmachine: Using API Version  1
	I0904 19:26:34.376200   13059 main.go:141] libmachine: () Calling .SetConfigRaw
	I0904 19:26:34.376251   13059 main.go:141] libmachine: (addons-586464) Calling .GetSSHUsername
	I0904 19:26:34.376431   13059 sshutil.go:53] new ssh client: &{IP:192.168.39.55 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19575-5257/.minikube/machines/addons-586464/id_rsa Username:docker}
	I0904 19:26:34.376602   13059 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0904 19:26:34.376632   13059 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0904 19:26:34.376910   13059 main.go:141] libmachine: (addons-586464) DBG | domain addons-586464 has defined MAC address 52:54:00:bb:04:2c in network mk-addons-586464
	I0904 19:26:34.377053   13059 main.go:141] libmachine: () Calling .GetMachineName
	I0904 19:26:34.377304   13059 main.go:141] libmachine: (addons-586464) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bb:04:2c", ip: ""} in network mk-addons-586464: {Iface:virbr1 ExpiryTime:2024-09-04 20:25:54 +0000 UTC Type:0 Mac:52:54:00:bb:04:2c Iaid: IPaddr:192.168.39.55 Prefix:24 Hostname:addons-586464 Clientid:01:52:54:00:bb:04:2c}
	I0904 19:26:34.377332   13059 main.go:141] libmachine: (addons-586464) DBG | domain addons-586464 has defined IP address 192.168.39.55 and MAC address 52:54:00:bb:04:2c in network mk-addons-586464
	I0904 19:26:34.377335   13059 main.go:141] libmachine: (addons-586464) Calling .GetState
	I0904 19:26:34.377586   13059 main.go:141] libmachine: (addons-586464) Calling .GetSSHPort
	I0904 19:26:34.377670   13059 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	I0904 19:26:34.377772   13059 main.go:141] libmachine: (addons-586464) Calling .GetSSHKeyPath
	I0904 19:26:34.377934   13059 main.go:141] libmachine: (addons-586464) Calling .GetSSHUsername
	I0904 19:26:34.378036   13059 sshutil.go:53] new ssh client: &{IP:192.168.39.55 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19575-5257/.minikube/machines/addons-586464/id_rsa Username:docker}
	I0904 19:26:34.378095   13059 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0904 19:26:34.378873   13059 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0904 19:26:34.378888   13059 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0904 19:26:34.378903   13059 main.go:141] libmachine: (addons-586464) Calling .GetSSHHostname
	I0904 19:26:34.378877   13059 main.go:141] libmachine: (addons-586464) Calling .DriverName
	I0904 19:26:34.378904   13059 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32911
	I0904 19:26:34.380235   13059 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0904 19:26:34.380308   13059 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.2
	I0904 19:26:34.381734   13059 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0904 19:26:34.381811   13059 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0904 19:26:34.382920   13059 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46069
	I0904 19:26:34.382928   13059 main.go:141] libmachine: (addons-586464) DBG | domain addons-586464 has defined MAC address 52:54:00:bb:04:2c in network mk-addons-586464
	I0904 19:26:34.382930   13059 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43059
	I0904 19:26:34.382966   13059 main.go:141] libmachine: () Calling .GetVersion
	I0904 19:26:34.383222   13059 main.go:141] libmachine: (addons-586464) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bb:04:2c", ip: ""} in network mk-addons-586464: {Iface:virbr1 ExpiryTime:2024-09-04 20:25:54 +0000 UTC Type:0 Mac:52:54:00:bb:04:2c Iaid: IPaddr:192.168.39.55 Prefix:24 Hostname:addons-586464 Clientid:01:52:54:00:bb:04:2c}
	I0904 19:26:34.383243   13059 main.go:141] libmachine: (addons-586464) DBG | domain addons-586464 has defined IP address 192.168.39.55 and MAC address 52:54:00:bb:04:2c in network mk-addons-586464
	I0904 19:26:34.383315   13059 main.go:141] libmachine: () Calling .GetVersion
	I0904 19:26:34.383922   13059 main.go:141] libmachine: Using API Version  1
	I0904 19:26:34.383925   13059 main.go:141] libmachine: () Calling .GetVersion
	I0904 19:26:34.383483   13059 main.go:141] libmachine: (addons-586464) Calling .GetSSHPort
	I0904 19:26:34.383892   13059 main.go:141] libmachine: Using API Version  1
	I0904 19:26:34.383991   13059 main.go:141] libmachine: () Calling .SetConfigRaw
	I0904 19:26:34.383944   13059 main.go:141] libmachine: () Calling .SetConfigRaw
	I0904 19:26:34.384120   13059 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0904 19:26:34.384158   13059 main.go:141] libmachine: (addons-586464) Calling .GetSSHKeyPath
	I0904 19:26:34.384332   13059 main.go:141] libmachine: () Calling .GetMachineName
	I0904 19:26:34.384612   13059 main.go:141] libmachine: () Calling .GetMachineName
	I0904 19:26:34.384612   13059 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43263
	I0904 19:26:34.384648   13059 main.go:141] libmachine: (addons-586464) Calling .GetSSHUsername
	I0904 19:26:34.384651   13059 main.go:141] libmachine: (addons-586464) Calling .GetState
	I0904 19:26:34.384784   13059 main.go:141] libmachine: Using API Version  1
	I0904 19:26:34.384796   13059 main.go:141] libmachine: () Calling .SetConfigRaw
	I0904 19:26:34.384818   13059 main.go:141] libmachine: (addons-586464) Calling .GetState
	I0904 19:26:34.384965   13059 sshutil.go:53] new ssh client: &{IP:192.168.39.55 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19575-5257/.minikube/machines/addons-586464/id_rsa Username:docker}
	I0904 19:26:34.385210   13059 main.go:141] libmachine: () Calling .GetVersion
	I0904 19:26:34.385217   13059 main.go:141] libmachine: () Calling .GetMachineName
	I0904 19:26:34.385342   13059 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0904 19:26:34.385548   13059 main.go:141] libmachine: (addons-586464) Calling .GetState
	I0904 19:26:34.385728   13059 main.go:141] libmachine: Using API Version  1
	I0904 19:26:34.385741   13059 main.go:141] libmachine: () Calling .SetConfigRaw
	I0904 19:26:34.386079   13059 main.go:141] libmachine: () Calling .GetMachineName
	I0904 19:26:34.386346   13059 main.go:141] libmachine: (addons-586464) Calling .GetState
	I0904 19:26:34.386468   13059 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0904 19:26:34.386836   13059 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0904 19:26:34.386850   13059 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0904 19:26:34.386865   13059 main.go:141] libmachine: (addons-586464) Calling .GetSSHHostname
	I0904 19:26:34.387053   13059 main.go:141] libmachine: (addons-586464) Calling .DriverName
	I0904 19:26:34.388012   13059 main.go:141] libmachine: (addons-586464) Calling .DriverName
	I0904 19:26:34.388610   13059 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0904 19:26:34.388674   13059 main.go:141] libmachine: (addons-586464) Calling .DriverName
	I0904 19:26:34.388801   13059 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.23
	I0904 19:26:34.388832   13059 main.go:141] libmachine: (addons-586464) Calling .DriverName
	I0904 19:26:34.389603   13059 out.go:177]   - Using image docker.io/registry:2.8.3
	I0904 19:26:34.390462   13059 main.go:141] libmachine: (addons-586464) DBG | domain addons-586464 has defined MAC address 52:54:00:bb:04:2c in network mk-addons-586464
	I0904 19:26:34.390840   13059 main.go:141] libmachine: (addons-586464) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bb:04:2c", ip: ""} in network mk-addons-586464: {Iface:virbr1 ExpiryTime:2024-09-04 20:25:54 +0000 UTC Type:0 Mac:52:54:00:bb:04:2c Iaid: IPaddr:192.168.39.55 Prefix:24 Hostname:addons-586464 Clientid:01:52:54:00:bb:04:2c}
	I0904 19:26:34.390855   13059 main.go:141] libmachine: (addons-586464) DBG | domain addons-586464 has defined IP address 192.168.39.55 and MAC address 52:54:00:bb:04:2c in network mk-addons-586464
	I0904 19:26:34.391019   13059 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0904 19:26:34.391027   13059 out.go:177]   - Using image docker.io/volcanosh/vc-controller-manager:v1.9.0
	I0904 19:26:34.391056   13059 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0904 19:26:34.391128   13059 main.go:141] libmachine: (addons-586464) Calling .GetSSHPort
	I0904 19:26:34.391152   13059 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0904 19:26:34.391164   13059 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0904 19:26:34.391176   13059 main.go:141] libmachine: (addons-586464) Calling .GetSSHHostname
	I0904 19:26:34.391642   13059 main.go:141] libmachine: (addons-586464) Calling .GetSSHKeyPath
	I0904 19:26:34.391827   13059 main.go:141] libmachine: (addons-586464) Calling .GetSSHUsername
	I0904 19:26:34.392015   13059 sshutil.go:53] new ssh client: &{IP:192.168.39.55 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19575-5257/.minikube/machines/addons-586464/id_rsa Username:docker}
	I0904 19:26:34.392253   13059 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0904 19:26:34.392265   13059 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0904 19:26:34.392268   13059 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0904 19:26:34.392357   13059 main.go:141] libmachine: (addons-586464) Calling .GetSSHHostname
	I0904 19:26:34.392424   13059 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0904 19:26:34.392431   13059 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0904 19:26:34.392441   13059 main.go:141] libmachine: (addons-586464) Calling .GetSSHHostname
	I0904 19:26:34.393887   13059 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0904 19:26:34.393902   13059 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0904 19:26:34.393916   13059 main.go:141] libmachine: (addons-586464) Calling .GetSSHHostname
	I0904 19:26:34.393964   13059 out.go:177]   - Using image docker.io/volcanosh/vc-scheduler:v1.9.0
	I0904 19:26:34.395314   13059 out.go:177]   - Using image docker.io/volcanosh/vc-webhook-manager:v1.9.0
	I0904 19:26:34.395778   13059 main.go:141] libmachine: (addons-586464) DBG | domain addons-586464 has defined MAC address 52:54:00:bb:04:2c in network mk-addons-586464
	I0904 19:26:34.395858   13059 main.go:141] libmachine: (addons-586464) DBG | domain addons-586464 has defined MAC address 52:54:00:bb:04:2c in network mk-addons-586464
	I0904 19:26:34.396160   13059 main.go:141] libmachine: (addons-586464) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bb:04:2c", ip: ""} in network mk-addons-586464: {Iface:virbr1 ExpiryTime:2024-09-04 20:25:54 +0000 UTC Type:0 Mac:52:54:00:bb:04:2c Iaid: IPaddr:192.168.39.55 Prefix:24 Hostname:addons-586464 Clientid:01:52:54:00:bb:04:2c}
	I0904 19:26:34.396178   13059 main.go:141] libmachine: (addons-586464) DBG | domain addons-586464 has defined IP address 192.168.39.55 and MAC address 52:54:00:bb:04:2c in network mk-addons-586464
	I0904 19:26:34.396378   13059 main.go:141] libmachine: (addons-586464) Calling .GetSSHPort
	I0904 19:26:34.396549   13059 main.go:141] libmachine: (addons-586464) Calling .GetSSHKeyPath
	I0904 19:26:34.397435   13059 main.go:141] libmachine: (addons-586464) DBG | domain addons-586464 has defined MAC address 52:54:00:bb:04:2c in network mk-addons-586464
	I0904 19:26:34.397479   13059 main.go:141] libmachine: (addons-586464) Calling .GetSSHUsername
	I0904 19:26:34.397554   13059 main.go:141] libmachine: (addons-586464) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bb:04:2c", ip: ""} in network mk-addons-586464: {Iface:virbr1 ExpiryTime:2024-09-04 20:25:54 +0000 UTC Type:0 Mac:52:54:00:bb:04:2c Iaid: IPaddr:192.168.39.55 Prefix:24 Hostname:addons-586464 Clientid:01:52:54:00:bb:04:2c}
	I0904 19:26:34.397570   13059 main.go:141] libmachine: (addons-586464) DBG | domain addons-586464 has defined IP address 192.168.39.55 and MAC address 52:54:00:bb:04:2c in network mk-addons-586464
	I0904 19:26:34.397633   13059 sshutil.go:53] new ssh client: &{IP:192.168.39.55 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19575-5257/.minikube/machines/addons-586464/id_rsa Username:docker}
	I0904 19:26:34.397684   13059 addons.go:431] installing /etc/kubernetes/addons/volcano-deployment.yaml
	I0904 19:26:34.397697   13059 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volcano-deployment.yaml (434001 bytes)
	I0904 19:26:34.397713   13059 main.go:141] libmachine: (addons-586464) Calling .GetSSHHostname
	I0904 19:26:34.397729   13059 main.go:141] libmachine: (addons-586464) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bb:04:2c", ip: ""} in network mk-addons-586464: {Iface:virbr1 ExpiryTime:2024-09-04 20:25:54 +0000 UTC Type:0 Mac:52:54:00:bb:04:2c Iaid: IPaddr:192.168.39.55 Prefix:24 Hostname:addons-586464 Clientid:01:52:54:00:bb:04:2c}
	I0904 19:26:34.397743   13059 main.go:141] libmachine: (addons-586464) DBG | domain addons-586464 has defined IP address 192.168.39.55 and MAC address 52:54:00:bb:04:2c in network mk-addons-586464
	I0904 19:26:34.398024   13059 main.go:141] libmachine: (addons-586464) Calling .GetSSHPort
	I0904 19:26:34.398082   13059 main.go:141] libmachine: (addons-586464) DBG | domain addons-586464 has defined MAC address 52:54:00:bb:04:2c in network mk-addons-586464
	I0904 19:26:34.398107   13059 main.go:141] libmachine: (addons-586464) Calling .GetSSHPort
	I0904 19:26:34.398400   13059 main.go:141] libmachine: (addons-586464) Calling .GetSSHKeyPath
	I0904 19:26:34.398438   13059 main.go:141] libmachine: (addons-586464) Calling .GetSSHKeyPath
	I0904 19:26:34.398450   13059 main.go:141] libmachine: (addons-586464) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bb:04:2c", ip: ""} in network mk-addons-586464: {Iface:virbr1 ExpiryTime:2024-09-04 20:25:54 +0000 UTC Type:0 Mac:52:54:00:bb:04:2c Iaid: IPaddr:192.168.39.55 Prefix:24 Hostname:addons-586464 Clientid:01:52:54:00:bb:04:2c}
	I0904 19:26:34.398466   13059 main.go:141] libmachine: (addons-586464) DBG | domain addons-586464 has defined IP address 192.168.39.55 and MAC address 52:54:00:bb:04:2c in network mk-addons-586464
	I0904 19:26:34.398559   13059 main.go:141] libmachine: (addons-586464) Calling .GetSSHUsername
	I0904 19:26:34.398588   13059 main.go:141] libmachine: (addons-586464) Calling .GetSSHUsername
	I0904 19:26:34.398668   13059 sshutil.go:53] new ssh client: &{IP:192.168.39.55 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19575-5257/.minikube/machines/addons-586464/id_rsa Username:docker}
	I0904 19:26:34.398740   13059 sshutil.go:53] new ssh client: &{IP:192.168.39.55 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19575-5257/.minikube/machines/addons-586464/id_rsa Username:docker}
	I0904 19:26:34.398767   13059 main.go:141] libmachine: (addons-586464) Calling .GetSSHPort
	I0904 19:26:34.398904   13059 main.go:141] libmachine: (addons-586464) Calling .GetSSHKeyPath
	I0904 19:26:34.399157   13059 main.go:141] libmachine: (addons-586464) Calling .GetSSHUsername
	I0904 19:26:34.399292   13059 sshutil.go:53] new ssh client: &{IP:192.168.39.55 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19575-5257/.minikube/machines/addons-586464/id_rsa Username:docker}
	I0904 19:26:34.400943   13059 main.go:141] libmachine: (addons-586464) DBG | domain addons-586464 has defined MAC address 52:54:00:bb:04:2c in network mk-addons-586464
	I0904 19:26:34.401375   13059 main.go:141] libmachine: (addons-586464) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bb:04:2c", ip: ""} in network mk-addons-586464: {Iface:virbr1 ExpiryTime:2024-09-04 20:25:54 +0000 UTC Type:0 Mac:52:54:00:bb:04:2c Iaid: IPaddr:192.168.39.55 Prefix:24 Hostname:addons-586464 Clientid:01:52:54:00:bb:04:2c}
	I0904 19:26:34.401394   13059 main.go:141] libmachine: (addons-586464) DBG | domain addons-586464 has defined IP address 192.168.39.55 and MAC address 52:54:00:bb:04:2c in network mk-addons-586464
	I0904 19:26:34.401666   13059 main.go:141] libmachine: (addons-586464) Calling .GetSSHPort
	I0904 19:26:34.401858   13059 main.go:141] libmachine: (addons-586464) Calling .GetSSHKeyPath
	I0904 19:26:34.401996   13059 main.go:141] libmachine: (addons-586464) Calling .GetSSHUsername
	I0904 19:26:34.402078   13059 sshutil.go:53] new ssh client: &{IP:192.168.39.55 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19575-5257/.minikube/machines/addons-586464/id_rsa Username:docker}
	I0904 19:26:34.403288   13059 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41977
	I0904 19:26:34.425921   13059 main.go:141] libmachine: () Calling .GetVersion
	I0904 19:26:34.426414   13059 main.go:141] libmachine: Using API Version  1
	I0904 19:26:34.426432   13059 main.go:141] libmachine: () Calling .SetConfigRaw
	I0904 19:26:34.426757   13059 main.go:141] libmachine: () Calling .GetMachineName
	I0904 19:26:34.426962   13059 main.go:141] libmachine: (addons-586464) Calling .GetState
	W0904 19:26:34.427860   13059 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:38562->192.168.39.55:22: read: connection reset by peer
	I0904 19:26:34.427887   13059 retry.go:31] will retry after 229.067052ms: ssh: handshake failed: read tcp 192.168.39.1:38562->192.168.39.55:22: read: connection reset by peer
	W0904 19:26:34.427949   13059 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:38576->192.168.39.55:22: read: connection reset by peer
	I0904 19:26:34.427957   13059 retry.go:31] will retry after 177.265459ms: ssh: handshake failed: read tcp 192.168.39.1:38576->192.168.39.55:22: read: connection reset by peer
	I0904 19:26:34.428680   13059 main.go:141] libmachine: (addons-586464) Calling .DriverName
	I0904 19:26:34.430676   13059 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0904 19:26:34.431891   13059 out.go:177]   - Using image docker.io/busybox:stable
	I0904 19:26:34.433177   13059 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0904 19:26:34.433194   13059 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0904 19:26:34.433209   13059 main.go:141] libmachine: (addons-586464) Calling .GetSSHHostname
	I0904 19:26:34.435951   13059 main.go:141] libmachine: (addons-586464) DBG | domain addons-586464 has defined MAC address 52:54:00:bb:04:2c in network mk-addons-586464
	I0904 19:26:34.436391   13059 main.go:141] libmachine: (addons-586464) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bb:04:2c", ip: ""} in network mk-addons-586464: {Iface:virbr1 ExpiryTime:2024-09-04 20:25:54 +0000 UTC Type:0 Mac:52:54:00:bb:04:2c Iaid: IPaddr:192.168.39.55 Prefix:24 Hostname:addons-586464 Clientid:01:52:54:00:bb:04:2c}
	I0904 19:26:34.436421   13059 main.go:141] libmachine: (addons-586464) DBG | domain addons-586464 has defined IP address 192.168.39.55 and MAC address 52:54:00:bb:04:2c in network mk-addons-586464
	I0904 19:26:34.436561   13059 main.go:141] libmachine: (addons-586464) Calling .GetSSHPort
	I0904 19:26:34.436736   13059 main.go:141] libmachine: (addons-586464) Calling .GetSSHKeyPath
	I0904 19:26:34.436881   13059 main.go:141] libmachine: (addons-586464) Calling .GetSSHUsername
	I0904 19:26:34.437018   13059 sshutil.go:53] new ssh client: &{IP:192.168.39.55 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19575-5257/.minikube/machines/addons-586464/id_rsa Username:docker}
	W0904 19:26:34.440362   13059 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:38588->192.168.39.55:22: read: connection reset by peer
	I0904 19:26:34.440384   13059 retry.go:31] will retry after 307.461233ms: ssh: handshake failed: read tcp 192.168.39.1:38588->192.168.39.55:22: read: connection reset by peer
	I0904 19:26:34.746081   13059 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0904 19:26:34.866943   13059 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0904 19:26:34.988305   13059 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0904 19:26:34.999057   13059 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0904 19:26:34.999078   13059 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0904 19:26:35.007738   13059 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0904 19:26:35.112091   13059 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0904 19:26:35.112127   13059 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0904 19:26:35.199296   13059 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0904 19:26:35.199320   13059 ssh_runner.go:362] scp helm-tiller/helm-tiller-rbac.yaml --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I0904 19:26:35.236951   13059 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0904 19:26:35.236978   13059 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0904 19:26:35.269300   13059 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0904 19:26:35.319160   13059 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0904 19:26:35.319182   13059 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0904 19:26:35.400471   13059 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0904 19:26:35.400499   13059 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0904 19:26:35.422024   13059 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml
	I0904 19:26:35.504501   13059 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0904 19:26:35.541363   13059 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0904 19:26:35.541383   13059 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0904 19:26:35.555186   13059 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml": (1.316528181s)
	I0904 19:26:35.555199   13059 ssh_runner.go:235] Completed: sudo systemctl daemon-reload: (1.308048771s)
	I0904 19:26:35.555334   13059 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0904 19:26:35.555377   13059 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0904 19:26:35.564671   13059 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0904 19:26:35.564697   13059 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0904 19:26:35.582716   13059 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0904 19:26:35.582738   13059 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0904 19:26:35.672918   13059 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0904 19:26:35.672942   13059 ssh_runner.go:362] scp helm-tiller/helm-tiller-svc.yaml --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I0904 19:26:35.710483   13059 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0904 19:26:35.717900   13059 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0904 19:26:35.717925   13059 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0904 19:26:35.764703   13059 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0904 19:26:35.764731   13059 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0904 19:26:35.804778   13059 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0904 19:26:35.804808   13059 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0904 19:26:35.852122   13059 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0904 19:26:35.852148   13059 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0904 19:26:35.854518   13059 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0904 19:26:35.932350   13059 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0904 19:26:35.932382   13059 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0904 19:26:35.942615   13059 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0904 19:26:35.942644   13059 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0904 19:26:35.945626   13059 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0904 19:26:35.995958   13059 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0904 19:26:35.995989   13059 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0904 19:26:36.005596   13059 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0904 19:26:36.005621   13059 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0904 19:26:36.075580   13059 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0904 19:26:36.198489   13059 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0904 19:26:36.198518   13059 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0904 19:26:36.256277   13059 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0904 19:26:36.256310   13059 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0904 19:26:36.339709   13059 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0904 19:26:36.339734   13059 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0904 19:26:36.387675   13059 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0904 19:26:36.387695   13059 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0904 19:26:36.453003   13059 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0904 19:26:36.453032   13059 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0904 19:26:36.472760   13059 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0904 19:26:36.472782   13059 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0904 19:26:36.491090   13059 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0904 19:26:36.491122   13059 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0904 19:26:36.507580   13059 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0904 19:26:36.520544   13059 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0904 19:26:36.520568   13059 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0904 19:26:36.593526   13059 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.847407594s)
	I0904 19:26:36.593583   13059 main.go:141] libmachine: Making call to close driver server
	I0904 19:26:36.593595   13059 main.go:141] libmachine: (addons-586464) Calling .Close
	I0904 19:26:36.593859   13059 main.go:141] libmachine: (addons-586464) DBG | Closing plugin on server side
	I0904 19:26:36.593959   13059 main.go:141] libmachine: Successfully made call to close driver server
	I0904 19:26:36.593978   13059 main.go:141] libmachine: Making call to close connection to plugin binary
	I0904 19:26:36.593995   13059 main.go:141] libmachine: Making call to close driver server
	I0904 19:26:36.594005   13059 main.go:141] libmachine: (addons-586464) Calling .Close
	I0904 19:26:36.594440   13059 main.go:141] libmachine: Successfully made call to close driver server
	I0904 19:26:36.594455   13059 main.go:141] libmachine: Making call to close connection to plugin binary
	I0904 19:26:36.594480   13059 main.go:141] libmachine: (addons-586464) DBG | Closing plugin on server side
	I0904 19:26:36.599613   13059 main.go:141] libmachine: Making call to close driver server
	I0904 19:26:36.599637   13059 main.go:141] libmachine: (addons-586464) Calling .Close
	I0904 19:26:36.599916   13059 main.go:141] libmachine: Successfully made call to close driver server
	I0904 19:26:36.599941   13059 main.go:141] libmachine: Making call to close connection to plugin binary
	I0904 19:26:36.657038   13059 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0904 19:26:36.677115   13059 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0904 19:26:36.677143   13059 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0904 19:26:36.731915   13059 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0904 19:26:36.731941   13059 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0904 19:26:36.866431   13059 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0904 19:26:36.866451   13059 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0904 19:26:36.972924   13059 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0904 19:26:37.112579   13059 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0904 19:26:37.112607   13059 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0904 19:26:37.376397   13059 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0904 19:26:37.376417   13059 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0904 19:26:37.519692   13059 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0904 19:26:37.519714   13059 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0904 19:26:37.865456   13059 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0904 19:26:37.865490   13059 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0904 19:26:37.979565   13059 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0904 19:26:39.913564   13059 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (4.925218832s)
	I0904 19:26:39.913613   13059 main.go:141] libmachine: Making call to close driver server
	I0904 19:26:39.913636   13059 main.go:141] libmachine: (addons-586464) Calling .Close
	I0904 19:26:39.913647   13059 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (5.046671158s)
	I0904 19:26:39.913655   13059 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (4.905889137s)
	I0904 19:26:39.913682   13059 main.go:141] libmachine: Making call to close driver server
	I0904 19:26:39.913700   13059 main.go:141] libmachine: (addons-586464) Calling .Close
	I0904 19:26:39.913682   13059 main.go:141] libmachine: Making call to close driver server
	I0904 19:26:39.913754   13059 main.go:141] libmachine: (addons-586464) Calling .Close
	I0904 19:26:39.914025   13059 main.go:141] libmachine: (addons-586464) DBG | Closing plugin on server side
	I0904 19:26:39.914031   13059 main.go:141] libmachine: (addons-586464) DBG | Closing plugin on server side
	I0904 19:26:39.914031   13059 main.go:141] libmachine: (addons-586464) DBG | Closing plugin on server side
	I0904 19:26:39.914056   13059 main.go:141] libmachine: Successfully made call to close driver server
	I0904 19:26:39.914074   13059 main.go:141] libmachine: Making call to close connection to plugin binary
	I0904 19:26:39.914076   13059 main.go:141] libmachine: Successfully made call to close driver server
	I0904 19:26:39.914098   13059 main.go:141] libmachine: Making call to close connection to plugin binary
	I0904 19:26:39.914080   13059 main.go:141] libmachine: Successfully made call to close driver server
	I0904 19:26:39.914121   13059 main.go:141] libmachine: Making call to close connection to plugin binary
	I0904 19:26:39.914129   13059 main.go:141] libmachine: Making call to close driver server
	I0904 19:26:39.914134   13059 main.go:141] libmachine: (addons-586464) Calling .Close
	I0904 19:26:39.914083   13059 main.go:141] libmachine: Making call to close driver server
	I0904 19:26:39.914181   13059 main.go:141] libmachine: (addons-586464) Calling .Close
	I0904 19:26:39.914112   13059 main.go:141] libmachine: Making call to close driver server
	I0904 19:26:39.914215   13059 main.go:141] libmachine: (addons-586464) Calling .Close
	I0904 19:26:39.914323   13059 main.go:141] libmachine: Successfully made call to close driver server
	I0904 19:26:39.914352   13059 main.go:141] libmachine: Making call to close connection to plugin binary
	I0904 19:26:39.914448   13059 main.go:141] libmachine: (addons-586464) DBG | Closing plugin on server side
	I0904 19:26:39.914458   13059 main.go:141] libmachine: Successfully made call to close driver server
	I0904 19:26:39.914485   13059 main.go:141] libmachine: Making call to close connection to plugin binary
	I0904 19:26:39.914502   13059 main.go:141] libmachine: (addons-586464) DBG | Closing plugin on server side
	I0904 19:26:39.914485   13059 main.go:141] libmachine: Successfully made call to close driver server
	I0904 19:26:39.914527   13059 main.go:141] libmachine: Making call to close connection to plugin binary
	I0904 19:26:41.379774   13059 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0904 19:26:41.379810   13059 main.go:141] libmachine: (addons-586464) Calling .GetSSHHostname
	I0904 19:26:41.382970   13059 main.go:141] libmachine: (addons-586464) DBG | domain addons-586464 has defined MAC address 52:54:00:bb:04:2c in network mk-addons-586464
	I0904 19:26:41.383378   13059 main.go:141] libmachine: (addons-586464) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bb:04:2c", ip: ""} in network mk-addons-586464: {Iface:virbr1 ExpiryTime:2024-09-04 20:25:54 +0000 UTC Type:0 Mac:52:54:00:bb:04:2c Iaid: IPaddr:192.168.39.55 Prefix:24 Hostname:addons-586464 Clientid:01:52:54:00:bb:04:2c}
	I0904 19:26:41.383418   13059 main.go:141] libmachine: (addons-586464) DBG | domain addons-586464 has defined IP address 192.168.39.55 and MAC address 52:54:00:bb:04:2c in network mk-addons-586464
	I0904 19:26:41.383590   13059 main.go:141] libmachine: (addons-586464) Calling .GetSSHPort
	I0904 19:26:41.383771   13059 main.go:141] libmachine: (addons-586464) Calling .GetSSHKeyPath
	I0904 19:26:41.383944   13059 main.go:141] libmachine: (addons-586464) Calling .GetSSHUsername
	I0904 19:26:41.384113   13059 sshutil.go:53] new ssh client: &{IP:192.168.39.55 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19575-5257/.minikube/machines/addons-586464/id_rsa Username:docker}
	I0904 19:26:41.925695   13059 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0904 19:26:42.145987   13059 addons.go:234] Setting addon gcp-auth=true in "addons-586464"
	I0904 19:26:42.146032   13059 host.go:66] Checking if "addons-586464" exists ...
	I0904 19:26:42.146354   13059 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0904 19:26:42.146380   13059 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0904 19:26:42.162123   13059 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46773
	I0904 19:26:42.162564   13059 main.go:141] libmachine: () Calling .GetVersion
	I0904 19:26:42.163075   13059 main.go:141] libmachine: Using API Version  1
	I0904 19:26:42.163103   13059 main.go:141] libmachine: () Calling .SetConfigRaw
	I0904 19:26:42.163428   13059 main.go:141] libmachine: () Calling .GetMachineName
	I0904 19:26:42.163967   13059 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0904 19:26:42.163995   13059 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0904 19:26:42.178778   13059 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43375
	I0904 19:26:42.179382   13059 main.go:141] libmachine: () Calling .GetVersion
	I0904 19:26:42.179933   13059 main.go:141] libmachine: Using API Version  1
	I0904 19:26:42.179962   13059 main.go:141] libmachine: () Calling .SetConfigRaw
	I0904 19:26:42.180294   13059 main.go:141] libmachine: () Calling .GetMachineName
	I0904 19:26:42.180953   13059 main.go:141] libmachine: (addons-586464) Calling .GetState
	I0904 19:26:42.182456   13059 main.go:141] libmachine: (addons-586464) Calling .DriverName
	I0904 19:26:42.182682   13059 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0904 19:26:42.182703   13059 main.go:141] libmachine: (addons-586464) Calling .GetSSHHostname
	I0904 19:26:42.185242   13059 main.go:141] libmachine: (addons-586464) DBG | domain addons-586464 has defined MAC address 52:54:00:bb:04:2c in network mk-addons-586464
	I0904 19:26:42.185625   13059 main.go:141] libmachine: (addons-586464) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bb:04:2c", ip: ""} in network mk-addons-586464: {Iface:virbr1 ExpiryTime:2024-09-04 20:25:54 +0000 UTC Type:0 Mac:52:54:00:bb:04:2c Iaid: IPaddr:192.168.39.55 Prefix:24 Hostname:addons-586464 Clientid:01:52:54:00:bb:04:2c}
	I0904 19:26:42.185649   13059 main.go:141] libmachine: (addons-586464) DBG | domain addons-586464 has defined IP address 192.168.39.55 and MAC address 52:54:00:bb:04:2c in network mk-addons-586464
	I0904 19:26:42.185828   13059 main.go:141] libmachine: (addons-586464) Calling .GetSSHPort
	I0904 19:26:42.185991   13059 main.go:141] libmachine: (addons-586464) Calling .GetSSHKeyPath
	I0904 19:26:42.186151   13059 main.go:141] libmachine: (addons-586464) Calling .GetSSHUsername
	I0904 19:26:42.186279   13059 sshutil.go:53] new ssh client: &{IP:192.168.39.55 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19575-5257/.minikube/machines/addons-586464/id_rsa Username:docker}
	I0904 19:26:44.441915   13059 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (9.172580252s)
	I0904 19:26:44.441962   13059 main.go:141] libmachine: Making call to close driver server
	I0904 19:26:44.441971   13059 main.go:141] libmachine: (addons-586464) Calling .Close
	I0904 19:26:44.442312   13059 main.go:141] libmachine: Successfully made call to close driver server
	I0904 19:26:44.442330   13059 main.go:141] libmachine: Making call to close connection to plugin binary
	I0904 19:26:44.442339   13059 main.go:141] libmachine: Making call to close driver server
	I0904 19:26:44.442352   13059 main.go:141] libmachine: (addons-586464) Calling .Close
	I0904 19:26:44.442391   13059 main.go:141] libmachine: (addons-586464) DBG | Closing plugin on server side
	I0904 19:26:44.442641   13059 main.go:141] libmachine: Successfully made call to close driver server
	I0904 19:26:44.442652   13059 main.go:141] libmachine: Making call to close connection to plugin binary
	I0904 19:26:44.442673   13059 addons.go:475] Verifying addon ingress=true in "addons-586464"
	I0904 19:26:44.444200   13059 out.go:177] * Verifying ingress addon...
	I0904 19:26:44.445967   13059 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0904 19:26:44.458033   13059 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0904 19:26:44.458052   13059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 19:26:44.956785   13059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 19:26:45.535456   13059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 19:26:46.098317   13059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 19:26:46.472021   13059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 19:26:46.989632   13059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 19:26:47.166749   13059 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml: (11.744687457s)
	I0904 19:26:47.166781   13059 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (11.611434712s)
	I0904 19:26:47.166799   13059 main.go:141] libmachine: Making call to close driver server
	I0904 19:26:47.166811   13059 main.go:141] libmachine: (addons-586464) Calling .Close
	I0904 19:26:47.166762   13059 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (11.662229279s)
	I0904 19:26:47.166818   13059 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (11.61142611s)
	I0904 19:26:47.166853   13059 main.go:141] libmachine: Making call to close driver server
	I0904 19:26:47.166799   13059 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0904 19:26:47.166865   13059 main.go:141] libmachine: (addons-586464) Calling .Close
	I0904 19:26:47.166866   13059 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (11.456358636s)
	I0904 19:26:47.166889   13059 main.go:141] libmachine: Making call to close driver server
	I0904 19:26:47.166905   13059 main.go:141] libmachine: (addons-586464) Calling .Close
	I0904 19:26:47.166976   13059 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (11.312435091s)
	I0904 19:26:47.167012   13059 main.go:141] libmachine: Making call to close driver server
	I0904 19:26:47.167032   13059 main.go:141] libmachine: (addons-586464) Calling .Close
	I0904 19:26:47.167030   13059 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (11.22138286s)
	I0904 19:26:47.167173   13059 main.go:141] libmachine: Making call to close driver server
	I0904 19:26:47.167188   13059 main.go:141] libmachine: (addons-586464) Calling .Close
	I0904 19:26:47.167275   13059 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (10.659664492s)
	I0904 19:26:47.167295   13059 main.go:141] libmachine: Making call to close driver server
	I0904 19:26:47.167304   13059 main.go:141] libmachine: (addons-586464) Calling .Close
	I0904 19:26:47.167391   13059 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (10.51032187s)
	W0904 19:26:47.167424   13059 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0904 19:26:47.167444   13059 retry.go:31] will retry after 349.398976ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0904 19:26:47.167522   13059 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (10.194569911s)
	I0904 19:26:47.167540   13059 main.go:141] libmachine: Making call to close driver server
	I0904 19:26:47.167550   13059 main.go:141] libmachine: (addons-586464) Calling .Close
	I0904 19:26:47.167217   13059 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (11.091609447s)
	I0904 19:26:47.167575   13059 main.go:141] libmachine: Making call to close driver server
	I0904 19:26:47.167585   13059 main.go:141] libmachine: (addons-586464) Calling .Close
	I0904 19:26:47.167839   13059 node_ready.go:35] waiting up to 6m0s for node "addons-586464" to be "Ready" ...
	I0904 19:26:47.168164   13059 main.go:141] libmachine: (addons-586464) DBG | Closing plugin on server side
	I0904 19:26:47.168166   13059 main.go:141] libmachine: Successfully made call to close driver server
	I0904 19:26:47.168181   13059 main.go:141] libmachine: Making call to close connection to plugin binary
	I0904 19:26:47.168190   13059 main.go:141] libmachine: Making call to close driver server
	I0904 19:26:47.168192   13059 main.go:141] libmachine: Successfully made call to close driver server
	I0904 19:26:47.168197   13059 main.go:141] libmachine: (addons-586464) Calling .Close
	I0904 19:26:47.168203   13059 main.go:141] libmachine: Making call to close connection to plugin binary
	I0904 19:26:47.168212   13059 main.go:141] libmachine: Successfully made call to close driver server
	I0904 19:26:47.168222   13059 main.go:141] libmachine: Making call to close connection to plugin binary
	I0904 19:26:47.168229   13059 main.go:141] libmachine: Making call to close driver server
	I0904 19:26:47.168230   13059 main.go:141] libmachine: (addons-586464) DBG | Closing plugin on server side
	I0904 19:26:47.168237   13059 main.go:141] libmachine: (addons-586464) Calling .Close
	I0904 19:26:47.168251   13059 main.go:141] libmachine: Successfully made call to close driver server
	I0904 19:26:47.168258   13059 main.go:141] libmachine: Making call to close connection to plugin binary
	I0904 19:26:47.168265   13059 main.go:141] libmachine: Making call to close driver server
	I0904 19:26:47.168273   13059 main.go:141] libmachine: (addons-586464) Calling .Close
	I0904 19:26:47.168213   13059 main.go:141] libmachine: Making call to close driver server
	I0904 19:26:47.168310   13059 main.go:141] libmachine: (addons-586464) Calling .Close
	I0904 19:26:47.168342   13059 main.go:141] libmachine: (addons-586464) DBG | Closing plugin on server side
	I0904 19:26:47.168359   13059 main.go:141] libmachine: Successfully made call to close driver server
	I0904 19:26:47.168366   13059 main.go:141] libmachine: Making call to close connection to plugin binary
	I0904 19:26:47.168376   13059 main.go:141] libmachine: Making call to close driver server
	I0904 19:26:47.168382   13059 main.go:141] libmachine: (addons-586464) Calling .Close
	I0904 19:26:47.168416   13059 main.go:141] libmachine: (addons-586464) DBG | Closing plugin on server side
	I0904 19:26:47.168429   13059 main.go:141] libmachine: (addons-586464) DBG | Closing plugin on server side
	I0904 19:26:47.168446   13059 main.go:141] libmachine: Successfully made call to close driver server
	I0904 19:26:47.168453   13059 main.go:141] libmachine: Making call to close connection to plugin binary
	I0904 19:26:47.168460   13059 main.go:141] libmachine: Making call to close driver server
	I0904 19:26:47.168466   13059 main.go:141] libmachine: (addons-586464) Calling .Close
	I0904 19:26:47.168504   13059 main.go:141] libmachine: Successfully made call to close driver server
	I0904 19:26:47.168511   13059 main.go:141] libmachine: Making call to close connection to plugin binary
	I0904 19:26:47.168517   13059 main.go:141] libmachine: Making call to close driver server
	I0904 19:26:47.168524   13059 main.go:141] libmachine: (addons-586464) Calling .Close
	I0904 19:26:47.170339   13059 main.go:141] libmachine: (addons-586464) DBG | Closing plugin on server side
	I0904 19:26:47.170405   13059 main.go:141] libmachine: Successfully made call to close driver server
	I0904 19:26:47.170419   13059 main.go:141] libmachine: Making call to close connection to plugin binary
	I0904 19:26:47.170427   13059 main.go:141] libmachine: Making call to close driver server
	I0904 19:26:47.170435   13059 main.go:141] libmachine: (addons-586464) Calling .Close
	I0904 19:26:47.170498   13059 main.go:141] libmachine: (addons-586464) DBG | Closing plugin on server side
	I0904 19:26:47.170530   13059 main.go:141] libmachine: Successfully made call to close driver server
	I0904 19:26:47.170537   13059 main.go:141] libmachine: Making call to close connection to plugin binary
	I0904 19:26:47.170603   13059 main.go:141] libmachine: Successfully made call to close driver server
	I0904 19:26:47.170611   13059 main.go:141] libmachine: Making call to close connection to plugin binary
	I0904 19:26:47.170806   13059 main.go:141] libmachine: Successfully made call to close driver server
	I0904 19:26:47.170820   13059 main.go:141] libmachine: Making call to close connection to plugin binary
	I0904 19:26:47.170838   13059 addons.go:475] Verifying addon registry=true in "addons-586464"
	I0904 19:26:47.170965   13059 main.go:141] libmachine: Successfully made call to close driver server
	I0904 19:26:47.170975   13059 main.go:141] libmachine: Making call to close connection to plugin binary
	I0904 19:26:47.171066   13059 main.go:141] libmachine: (addons-586464) DBG | Closing plugin on server side
	I0904 19:26:47.171085   13059 main.go:141] libmachine: (addons-586464) DBG | Closing plugin on server side
	I0904 19:26:47.171106   13059 main.go:141] libmachine: Successfully made call to close driver server
	I0904 19:26:47.171113   13059 main.go:141] libmachine: Making call to close connection to plugin binary
	I0904 19:26:47.171163   13059 main.go:141] libmachine: Successfully made call to close driver server
	I0904 19:26:47.171177   13059 main.go:141] libmachine: Making call to close connection to plugin binary
	I0904 19:26:47.171221   13059 main.go:141] libmachine: (addons-586464) DBG | Closing plugin on server side
	I0904 19:26:47.171243   13059 main.go:141] libmachine: Successfully made call to close driver server
	I0904 19:26:47.171254   13059 main.go:141] libmachine: Making call to close connection to plugin binary
	I0904 19:26:47.172029   13059 main.go:141] libmachine: (addons-586464) DBG | Closing plugin on server side
	I0904 19:26:47.172060   13059 main.go:141] libmachine: Successfully made call to close driver server
	I0904 19:26:47.172069   13059 main.go:141] libmachine: Making call to close connection to plugin binary
	I0904 19:26:47.172078   13059 addons.go:475] Verifying addon metrics-server=true in "addons-586464"
	I0904 19:26:47.173830   13059 out.go:177] * Verifying registry addon...
	I0904 19:26:47.173830   13059 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-586464 service yakd-dashboard -n yakd-dashboard
	
	I0904 19:26:47.175856   13059 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0904 19:26:47.206509   13059 node_ready.go:49] node "addons-586464" has status "Ready":"True"
	I0904 19:26:47.206529   13059 node_ready.go:38] duration metric: took 38.669145ms for node "addons-586464" to be "Ready" ...
	I0904 19:26:47.206537   13059 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0904 19:26:47.217931   13059 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0904 19:26:47.217955   13059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 19:26:47.338919   13059 main.go:141] libmachine: Making call to close driver server
	I0904 19:26:47.338942   13059 main.go:141] libmachine: (addons-586464) Calling .Close
	I0904 19:26:47.339231   13059 main.go:141] libmachine: Successfully made call to close driver server
	I0904 19:26:47.339244   13059 main.go:141] libmachine: Making call to close connection to plugin binary
	I0904 19:26:47.363402   13059 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-wf45w" in "kube-system" namespace to be "Ready" ...
	I0904 19:26:47.460671   13059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 19:26:47.517474   13059 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0904 19:26:47.670326   13059 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-586464" context rescaled to 1 replicas
	I0904 19:26:47.689659   13059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 19:26:47.972495   13059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 19:26:48.017359   13059 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (10.037724415s)
	I0904 19:26:48.017409   13059 main.go:141] libmachine: Making call to close driver server
	I0904 19:26:48.017413   13059 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (5.834709234s)
	I0904 19:26:48.017423   13059 main.go:141] libmachine: (addons-586464) Calling .Close
	I0904 19:26:48.017793   13059 main.go:141] libmachine: Successfully made call to close driver server
	I0904 19:26:48.017807   13059 main.go:141] libmachine: Making call to close connection to plugin binary
	I0904 19:26:48.017815   13059 main.go:141] libmachine: Making call to close driver server
	I0904 19:26:48.017822   13059 main.go:141] libmachine: (addons-586464) Calling .Close
	I0904 19:26:48.018019   13059 main.go:141] libmachine: Successfully made call to close driver server
	I0904 19:26:48.018034   13059 main.go:141] libmachine: Making call to close connection to plugin binary
	I0904 19:26:48.018044   13059 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-586464"
	I0904 19:26:48.019325   13059 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0904 19:26:48.019333   13059 out.go:177] * Verifying csi-hostpath-driver addon...
	I0904 19:26:48.021011   13059 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0904 19:26:48.021573   13059 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0904 19:26:48.022206   13059 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0904 19:26:48.022223   13059 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0904 19:26:48.061203   13059 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0904 19:26:48.061224   13059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 19:26:48.069374   13059 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0904 19:26:48.069402   13059 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0904 19:26:48.148378   13059 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0904 19:26:48.148405   13059 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0904 19:26:48.183429   13059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 19:26:48.270681   13059 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0904 19:26:48.488363   13059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 19:26:48.633760   13059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 19:26:48.679158   13059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 19:26:48.951587   13059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 19:26:49.026389   13059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 19:26:49.180681   13059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 19:26:49.369637   13059 pod_ready.go:103] pod "coredns-6f6b679f8f-wf45w" in "kube-system" namespace has status "Ready":"False"
	I0904 19:26:49.453660   13059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 19:26:49.559257   13059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 19:26:49.587424   13059 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.069900296s)
	I0904 19:26:49.587475   13059 main.go:141] libmachine: Making call to close driver server
	I0904 19:26:49.587492   13059 main.go:141] libmachine: (addons-586464) Calling .Close
	I0904 19:26:49.587769   13059 main.go:141] libmachine: Successfully made call to close driver server
	I0904 19:26:49.587791   13059 main.go:141] libmachine: Making call to close connection to plugin binary
	I0904 19:26:49.587807   13059 main.go:141] libmachine: Making call to close driver server
	I0904 19:26:49.587816   13059 main.go:141] libmachine: (addons-586464) Calling .Close
	I0904 19:26:49.588060   13059 main.go:141] libmachine: (addons-586464) DBG | Closing plugin on server side
	I0904 19:26:49.588105   13059 main.go:141] libmachine: Successfully made call to close driver server
	I0904 19:26:49.588118   13059 main.go:141] libmachine: Making call to close connection to plugin binary
	I0904 19:26:49.631107   13059 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.360321145s)
	I0904 19:26:49.631161   13059 main.go:141] libmachine: Making call to close driver server
	I0904 19:26:49.631178   13059 main.go:141] libmachine: (addons-586464) Calling .Close
	I0904 19:26:49.631421   13059 main.go:141] libmachine: Successfully made call to close driver server
	I0904 19:26:49.631456   13059 main.go:141] libmachine: Making call to close connection to plugin binary
	I0904 19:26:49.631471   13059 main.go:141] libmachine: Making call to close driver server
	I0904 19:26:49.631480   13059 main.go:141] libmachine: (addons-586464) Calling .Close
	I0904 19:26:49.631739   13059 main.go:141] libmachine: Successfully made call to close driver server
	I0904 19:26:49.631759   13059 main.go:141] libmachine: Making call to close connection to plugin binary
	I0904 19:26:49.631774   13059 main.go:141] libmachine: (addons-586464) DBG | Closing plugin on server side
	I0904 19:26:49.633776   13059 addons.go:475] Verifying addon gcp-auth=true in "addons-586464"
	I0904 19:26:49.635393   13059 out.go:177] * Verifying gcp-auth addon...
	I0904 19:26:49.637916   13059 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0904 19:26:49.658461   13059 kapi.go:86] Found 0 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0904 19:26:49.755916   13059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 19:26:49.950132   13059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 19:26:50.026330   13059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 19:26:50.179089   13059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 19:26:50.450859   13059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 19:26:50.527774   13059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 19:26:50.742530   13059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 19:26:50.950681   13059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 19:26:51.026466   13059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 19:26:51.179917   13059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 19:26:51.450814   13059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 19:26:51.526156   13059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 19:26:51.679614   13059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 19:26:51.872183   13059 pod_ready.go:103] pod "coredns-6f6b679f8f-wf45w" in "kube-system" namespace has status "Ready":"False"
	I0904 19:26:51.951026   13059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 19:26:52.026312   13059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 19:26:52.179340   13059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 19:26:52.452110   13059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 19:26:52.527888   13059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 19:26:52.996303   13059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 19:26:52.996922   13059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 19:26:53.092974   13059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 19:26:53.192849   13059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 19:26:53.370475   13059 pod_ready.go:98] pod "coredns-6f6b679f8f-wf45w" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:PodReadyToStartContainers Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-04 19:26:53 +0000 UTC Reason: Message:} {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-04 19:26:34 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-04 19:26:34 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-04 19:26:34 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-04 19:26:34 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.39.55 HostIPs:[{IP:192.168.39.55}] PodIP:10.244.0.2 PodIPs:[{IP:10.244.0.2}] StartTime:2024-09-04 19:26:34 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2024-09-04 19:26:35 +0000 UTC,FinishedAt:2024-09-04 19:26:52 +0000 UTC,ContainerID:docker://8ac34cb0b3ca279a71c96243030917e4f9e46c72ac5491c6ab995b731afd3def,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.11.1 ImageID:docker-pullable://registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1 ContainerID:docker://8ac34cb0b3ca279a71c96243030917e4f9e46c72ac5491c6ab995b731afd3def Started:0xc0029aac10 AllocatedResources:map[] Resources:nil VolumeMounts:[{Name:config-volume MountPath:/etc/coredns ReadOnly:true RecursiveReadOnly:0xc0028eb110} {Name:kube-api-access-rjdgm MountPath:/var/run/secrets/kubernetes.io/serviceaccount ReadOnly:true RecursiveReadOnly:0xc0028eb120}] User:nil AllocatedResourcesStatus:[]}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0904 19:26:53.370516   13059 pod_ready.go:82] duration metric: took 6.007086074s for pod "coredns-6f6b679f8f-wf45w" in "kube-system" namespace to be "Ready" ...
	E0904 19:26:53.370533   13059 pod_ready.go:67] WaitExtra: waitPodCondition: pod "coredns-6f6b679f8f-wf45w" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:PodReadyToStartContainers Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-04 19:26:53 +0000 UTC Reason: Message:} {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-04 19:26:34 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-04 19:26:34 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-04 19:26:34 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-04 19:26:34 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.39.55 HostIPs:[{IP:192.168.39.55}] PodIP:10.244.0.2 PodIPs:[{IP:10.244.0.2}] StartTime:2024-09-04 19:26:34 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2024-09-04 19:26:35 +0000 UTC,FinishedAt:2024-09-04 19:26:52 +0000 UTC,ContainerID:docker://8ac34cb0b3ca279a71c96243030917e4f9e46c72ac5491c6ab995b731afd3def,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.11.1 ImageID:docker-pullable://registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1 ContainerID:docker://8ac34cb0b3ca279a71c96243030917e4f9e46c72ac5491c6ab995b731afd3def Started:0xc0029aac10 AllocatedResources:map[] Resources:nil VolumeMounts:[{Name:config-volume MountPath:/etc/coredns ReadOnly:true RecursiveReadOnly:0xc0028eb110} {Name:kube-api-access-rjdgm MountPath:/var/run/secrets/kubernetes.io/serviceaccount ReadOnly:true RecursiveReadOnly:0xc0028eb120}] User:nil AllocatedResourcesStatus:[]}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0904 19:26:53.370546   13059 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-zfdmz" in "kube-system" namespace to be "Ready" ...
	I0904 19:26:53.451152   13059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 19:26:53.526627   13059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 19:26:53.680295   13059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 19:26:53.949835   13059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 19:26:54.026314   13059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 19:26:54.179587   13059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 19:26:54.535103   13059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 19:26:54.536827   13059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 19:26:54.679838   13059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 19:26:54.951952   13059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 19:26:55.026365   13059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 19:26:55.179587   13059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 19:26:55.376832   13059 pod_ready.go:103] pod "coredns-6f6b679f8f-zfdmz" in "kube-system" namespace has status "Ready":"False"
	I0904 19:26:55.450302   13059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 19:26:55.528290   13059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 19:26:55.680429   13059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 19:26:55.949903   13059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 19:26:56.027282   13059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 19:26:56.179868   13059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 19:26:56.450293   13059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 19:26:56.526566   13059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 19:26:56.680254   13059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 19:26:56.951505   13059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 19:26:57.026805   13059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 19:26:57.180018   13059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 19:26:57.376872   13059 pod_ready.go:103] pod "coredns-6f6b679f8f-zfdmz" in "kube-system" namespace has status "Ready":"False"
	I0904 19:26:57.450696   13059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 19:26:57.526141   13059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 19:26:57.679673   13059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 19:26:57.950362   13059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 19:26:58.025186   13059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 19:26:58.179812   13059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 19:26:58.450351   13059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 19:26:58.525562   13059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 19:26:58.679045   13059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 19:26:58.950934   13059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 19:26:59.025951   13059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 19:26:59.180023   13059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 19:26:59.451493   13059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 19:26:59.526046   13059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 19:26:59.678843   13059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 19:26:59.877900   13059 pod_ready.go:103] pod "coredns-6f6b679f8f-zfdmz" in "kube-system" namespace has status "Ready":"False"
	I0904 19:26:59.951120   13059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 19:27:00.026359   13059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 19:27:00.179857   13059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 19:27:00.450698   13059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 19:27:00.525883   13059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 19:27:00.680519   13059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 19:27:00.950679   13059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 19:27:01.277290   13059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 19:27:01.278050   13059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 19:27:01.450695   13059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 19:27:01.526544   13059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 19:27:01.680039   13059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 19:27:01.951637   13059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 19:27:02.026110   13059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 19:27:02.180007   13059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 19:27:02.378353   13059 pod_ready.go:103] pod "coredns-6f6b679f8f-zfdmz" in "kube-system" namespace has status "Ready":"False"
	I0904 19:27:02.450223   13059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 19:27:02.526907   13059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 19:27:02.680198   13059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 19:27:02.951537   13059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 19:27:03.217971   13059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 19:27:03.218444   13059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 19:27:03.451572   13059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 19:27:03.527725   13059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 19:27:03.688743   13059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 19:27:03.952019   13059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 19:27:04.029900   13059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 19:27:04.179420   13059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 19:27:04.451607   13059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 19:27:04.526901   13059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 19:27:04.681977   13059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 19:27:04.877525   13059 pod_ready.go:103] pod "coredns-6f6b679f8f-zfdmz" in "kube-system" namespace has status "Ready":"False"
	I0904 19:27:04.951946   13059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 19:27:05.027224   13059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 19:27:05.181068   13059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 19:27:05.450323   13059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 19:27:05.526592   13059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 19:27:05.679364   13059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 19:27:05.952453   13059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 19:27:06.026459   13059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 19:27:06.179433   13059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 19:27:06.450712   13059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 19:27:06.525663   13059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 19:27:06.679196   13059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 19:27:06.950953   13059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 19:27:07.025847   13059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 19:27:07.180114   13059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 19:27:07.376755   13059 pod_ready.go:103] pod "coredns-6f6b679f8f-zfdmz" in "kube-system" namespace has status "Ready":"False"
	I0904 19:27:07.457944   13059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 19:27:07.554978   13059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 19:27:07.679327   13059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 19:27:07.950413   13059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 19:27:08.051508   13059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 19:27:08.179441   13059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 19:27:08.451302   13059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 19:27:08.526464   13059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 19:27:08.680057   13059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 19:27:08.879330   13059 pod_ready.go:93] pod "coredns-6f6b679f8f-zfdmz" in "kube-system" namespace has status "Ready":"True"
	I0904 19:27:08.879358   13059 pod_ready.go:82] duration metric: took 15.508800078s for pod "coredns-6f6b679f8f-zfdmz" in "kube-system" namespace to be "Ready" ...
	I0904 19:27:08.879368   13059 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-586464" in "kube-system" namespace to be "Ready" ...
	I0904 19:27:08.886357   13059 pod_ready.go:93] pod "etcd-addons-586464" in "kube-system" namespace has status "Ready":"True"
	I0904 19:27:08.886387   13059 pod_ready.go:82] duration metric: took 7.010962ms for pod "etcd-addons-586464" in "kube-system" namespace to be "Ready" ...
	I0904 19:27:08.886399   13059 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-586464" in "kube-system" namespace to be "Ready" ...
	I0904 19:27:08.896795   13059 pod_ready.go:93] pod "kube-apiserver-addons-586464" in "kube-system" namespace has status "Ready":"True"
	I0904 19:27:08.896827   13059 pod_ready.go:82] duration metric: took 10.419125ms for pod "kube-apiserver-addons-586464" in "kube-system" namespace to be "Ready" ...
	I0904 19:27:08.896841   13059 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-586464" in "kube-system" namespace to be "Ready" ...
	I0904 19:27:08.906370   13059 pod_ready.go:93] pod "kube-controller-manager-addons-586464" in "kube-system" namespace has status "Ready":"True"
	I0904 19:27:08.906403   13059 pod_ready.go:82] duration metric: took 9.553155ms for pod "kube-controller-manager-addons-586464" in "kube-system" namespace to be "Ready" ...
	I0904 19:27:08.906421   13059 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-mjv7n" in "kube-system" namespace to be "Ready" ...
	I0904 19:27:08.916198   13059 pod_ready.go:93] pod "kube-proxy-mjv7n" in "kube-system" namespace has status "Ready":"True"
	I0904 19:27:08.916231   13059 pod_ready.go:82] duration metric: took 9.802066ms for pod "kube-proxy-mjv7n" in "kube-system" namespace to be "Ready" ...
	I0904 19:27:08.916246   13059 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-586464" in "kube-system" namespace to be "Ready" ...
	I0904 19:27:08.951749   13059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 19:27:09.026864   13059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 19:27:09.179838   13059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 19:27:09.274724   13059 pod_ready.go:93] pod "kube-scheduler-addons-586464" in "kube-system" namespace has status "Ready":"True"
	I0904 19:27:09.274758   13059 pod_ready.go:82] duration metric: took 358.500045ms for pod "kube-scheduler-addons-586464" in "kube-system" namespace to be "Ready" ...
	I0904 19:27:09.274772   13059 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-84c5f94fbc-jtggb" in "kube-system" namespace to be "Ready" ...
	I0904 19:27:09.452232   13059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 19:27:09.526323   13059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 19:27:09.678805   13059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 19:27:09.951003   13059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 19:27:10.026368   13059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 19:27:10.179806   13059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 19:27:10.450047   13059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 19:27:10.525330   13059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 19:27:10.681151   13059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 19:27:10.949977   13059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 19:27:11.026213   13059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 19:27:11.179320   13059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 19:27:11.280339   13059 pod_ready.go:103] pod "metrics-server-84c5f94fbc-jtggb" in "kube-system" namespace has status "Ready":"False"
	I0904 19:27:11.450231   13059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 19:27:11.525412   13059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 19:27:11.679541   13059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 19:27:11.950623   13059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 19:27:12.025724   13059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 19:27:12.180062   13059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 19:27:12.449526   13059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 19:27:12.526001   13059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 19:27:12.680970   13059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 19:27:12.950258   13059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 19:27:13.026882   13059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 19:27:13.180163   13059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 19:27:13.281441   13059 pod_ready.go:103] pod "metrics-server-84c5f94fbc-jtggb" in "kube-system" namespace has status "Ready":"False"
	I0904 19:27:13.451980   13059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 19:27:13.526606   13059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 19:27:13.679159   13059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 19:27:13.949992   13059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 19:27:14.026250   13059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 19:27:14.179680   13059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 19:27:14.450114   13059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 19:27:14.525995   13059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 19:27:14.887460   13059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 19:27:14.951245   13059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 19:27:15.027099   13059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 19:27:15.180538   13059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 19:27:15.282295   13059 pod_ready.go:103] pod "metrics-server-84c5f94fbc-jtggb" in "kube-system" namespace has status "Ready":"False"
	I0904 19:27:15.555050   13059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 19:27:15.556186   13059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 19:27:15.679944   13059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0904 19:27:15.950775   13059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 19:27:16.026262   13059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 19:27:16.180141   13059 kapi.go:107] duration metric: took 29.004279036s to wait for kubernetes.io/minikube-addons=registry ...
	I0904 19:27:16.450203   13059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 19:27:16.526698   13059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 19:27:16.950222   13059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 19:27:17.026991   13059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 19:27:17.450691   13059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 19:27:17.525914   13059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 19:27:17.786238   13059 pod_ready.go:103] pod "metrics-server-84c5f94fbc-jtggb" in "kube-system" namespace has status "Ready":"False"
	I0904 19:27:18.003803   13059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 19:27:18.104453   13059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 19:27:18.450276   13059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 19:27:18.525777   13059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 19:27:18.950604   13059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 19:27:19.050959   13059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 19:27:19.450823   13059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 19:27:19.662554   13059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 19:27:19.950519   13059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 19:27:20.039317   13059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 19:27:20.282629   13059 pod_ready.go:103] pod "metrics-server-84c5f94fbc-jtggb" in "kube-system" namespace has status "Ready":"False"
	I0904 19:27:20.451609   13059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 19:27:20.525967   13059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 19:27:20.953966   13059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 19:27:21.055134   13059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 19:27:21.450786   13059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 19:27:21.534220   13059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 19:27:21.951878   13059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 19:27:22.026664   13059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 19:27:22.281568   13059 pod_ready.go:93] pod "metrics-server-84c5f94fbc-jtggb" in "kube-system" namespace has status "Ready":"True"
	I0904 19:27:22.281593   13059 pod_ready.go:82] duration metric: took 13.006814035s for pod "metrics-server-84c5f94fbc-jtggb" in "kube-system" namespace to be "Ready" ...
	I0904 19:27:22.281602   13059 pod_ready.go:79] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-lrw9n" in "kube-system" namespace to be "Ready" ...
	I0904 19:27:22.287500   13059 pod_ready.go:93] pod "nvidia-device-plugin-daemonset-lrw9n" in "kube-system" namespace has status "Ready":"True"
	I0904 19:27:22.287520   13059 pod_ready.go:82] duration metric: took 5.910857ms for pod "nvidia-device-plugin-daemonset-lrw9n" in "kube-system" namespace to be "Ready" ...
	I0904 19:27:22.287539   13059 pod_ready.go:39] duration metric: took 35.080991784s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0904 19:27:22.287557   13059 api_server.go:52] waiting for apiserver process to appear ...
	I0904 19:27:22.287601   13059 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0904 19:27:22.308034   13059 api_server.go:72] duration metric: took 48.069366147s to wait for apiserver process to appear ...
	I0904 19:27:22.308063   13059 api_server.go:88] waiting for apiserver healthz status ...
	I0904 19:27:22.308083   13059 api_server.go:253] Checking apiserver healthz at https://192.168.39.55:8443/healthz ...
	I0904 19:27:22.314114   13059 api_server.go:279] https://192.168.39.55:8443/healthz returned 200:
	ok
	I0904 19:27:22.315138   13059 api_server.go:141] control plane version: v1.31.0
	I0904 19:27:22.315161   13059 api_server.go:131] duration metric: took 7.091571ms to wait for apiserver health ...
	I0904 19:27:22.315168   13059 system_pods.go:43] waiting for kube-system pods to appear ...
	I0904 19:27:22.325458   13059 system_pods.go:59] 18 kube-system pods found
	I0904 19:27:22.325486   13059 system_pods.go:61] "coredns-6f6b679f8f-zfdmz" [dddad47f-b0fb-416e-9ed0-acba231709c2] Running
	I0904 19:27:22.325494   13059 system_pods.go:61] "csi-hostpath-attacher-0" [1a282ee3-ff35-423e-834a-6c783f9da000] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0904 19:27:22.325500   13059 system_pods.go:61] "csi-hostpath-resizer-0" [251b5645-da25-4994-85ec-31967c4bc1d1] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0904 19:27:22.325509   13059 system_pods.go:61] "csi-hostpathplugin-94pvw" [497ed6df-fb92-4c1c-9fed-f8c4ce6e01ba] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0904 19:27:22.325513   13059 system_pods.go:61] "etcd-addons-586464" [e405653d-ab01-46a3-8063-f040e21c9a70] Running
	I0904 19:27:22.325517   13059 system_pods.go:61] "kube-apiserver-addons-586464" [8e6fcc14-374d-4479-8d82-25dde72454af] Running
	I0904 19:27:22.325520   13059 system_pods.go:61] "kube-controller-manager-addons-586464" [49c50b16-ddf4-4480-921e-02b115706b2a] Running
	I0904 19:27:22.325525   13059 system_pods.go:61] "kube-ingress-dns-minikube" [dfe8e912-b2f2-43a8-a910-6081d849d0ff] Running
	I0904 19:27:22.325551   13059 system_pods.go:61] "kube-proxy-mjv7n" [6ede0936-0267-405e-9fca-adc7a8b0e5be] Running
	I0904 19:27:22.325560   13059 system_pods.go:61] "kube-scheduler-addons-586464" [976a9b75-4a8f-4a00-927d-7481f7d61bc8] Running
	I0904 19:27:22.325564   13059 system_pods.go:61] "metrics-server-84c5f94fbc-jtggb" [0f496763-5e7a-440e-8fb3-5f15aa8ff867] Running
	I0904 19:27:22.325567   13059 system_pods.go:61] "nvidia-device-plugin-daemonset-lrw9n" [70f69048-7703-4d70-9f03-60264762e688] Running
	I0904 19:27:22.325570   13059 system_pods.go:61] "registry-6fb4cdfc84-4pdpl" [bc53f46d-169c-40d2-af79-12a0385b32f2] Running
	I0904 19:27:22.325574   13059 system_pods.go:61] "registry-proxy-cmjxq" [8268ad2e-f9a0-47f5-ba61-028fe8e93107] Running
	I0904 19:27:22.325580   13059 system_pods.go:61] "snapshot-controller-56fcc65765-x7ftx" [c5ddfb7d-a3ca-4b31-a6eb-480b48a85c28] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0904 19:27:22.325585   13059 system_pods.go:61] "snapshot-controller-56fcc65765-x9jwl" [9e41d248-61b7-45fc-8d2d-a38d79978d6e] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0904 19:27:22.325590   13059 system_pods.go:61] "storage-provisioner" [01fe09f0-efa7-44cb-a41a-d2afe45c4573] Running
	I0904 19:27:22.325594   13059 system_pods.go:61] "tiller-deploy-b48cc5f79-gnf8s" [160e65ec-e4d0-4b19-baaa-862e4eabf268] Running
	I0904 19:27:22.325599   13059 system_pods.go:74] duration metric: took 10.425911ms to wait for pod list to return data ...
	I0904 19:27:22.325608   13059 default_sa.go:34] waiting for default service account to be created ...
	I0904 19:27:22.329202   13059 default_sa.go:45] found service account: "default"
	I0904 19:27:22.329221   13059 default_sa.go:55] duration metric: took 3.607866ms for default service account to be created ...
	I0904 19:27:22.329228   13059 system_pods.go:116] waiting for k8s-apps to be running ...
	I0904 19:27:22.336126   13059 system_pods.go:86] 18 kube-system pods found
	I0904 19:27:22.336151   13059 system_pods.go:89] "coredns-6f6b679f8f-zfdmz" [dddad47f-b0fb-416e-9ed0-acba231709c2] Running
	I0904 19:27:22.336160   13059 system_pods.go:89] "csi-hostpath-attacher-0" [1a282ee3-ff35-423e-834a-6c783f9da000] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0904 19:27:22.336166   13059 system_pods.go:89] "csi-hostpath-resizer-0" [251b5645-da25-4994-85ec-31967c4bc1d1] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0904 19:27:22.336174   13059 system_pods.go:89] "csi-hostpathplugin-94pvw" [497ed6df-fb92-4c1c-9fed-f8c4ce6e01ba] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0904 19:27:22.336179   13059 system_pods.go:89] "etcd-addons-586464" [e405653d-ab01-46a3-8063-f040e21c9a70] Running
	I0904 19:27:22.336184   13059 system_pods.go:89] "kube-apiserver-addons-586464" [8e6fcc14-374d-4479-8d82-25dde72454af] Running
	I0904 19:27:22.336188   13059 system_pods.go:89] "kube-controller-manager-addons-586464" [49c50b16-ddf4-4480-921e-02b115706b2a] Running
	I0904 19:27:22.336194   13059 system_pods.go:89] "kube-ingress-dns-minikube" [dfe8e912-b2f2-43a8-a910-6081d849d0ff] Running
	I0904 19:27:22.336197   13059 system_pods.go:89] "kube-proxy-mjv7n" [6ede0936-0267-405e-9fca-adc7a8b0e5be] Running
	I0904 19:27:22.336200   13059 system_pods.go:89] "kube-scheduler-addons-586464" [976a9b75-4a8f-4a00-927d-7481f7d61bc8] Running
	I0904 19:27:22.336203   13059 system_pods.go:89] "metrics-server-84c5f94fbc-jtggb" [0f496763-5e7a-440e-8fb3-5f15aa8ff867] Running
	I0904 19:27:22.336206   13059 system_pods.go:89] "nvidia-device-plugin-daemonset-lrw9n" [70f69048-7703-4d70-9f03-60264762e688] Running
	I0904 19:27:22.336210   13059 system_pods.go:89] "registry-6fb4cdfc84-4pdpl" [bc53f46d-169c-40d2-af79-12a0385b32f2] Running
	I0904 19:27:22.336213   13059 system_pods.go:89] "registry-proxy-cmjxq" [8268ad2e-f9a0-47f5-ba61-028fe8e93107] Running
	I0904 19:27:22.336219   13059 system_pods.go:89] "snapshot-controller-56fcc65765-x7ftx" [c5ddfb7d-a3ca-4b31-a6eb-480b48a85c28] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0904 19:27:22.336225   13059 system_pods.go:89] "snapshot-controller-56fcc65765-x9jwl" [9e41d248-61b7-45fc-8d2d-a38d79978d6e] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0904 19:27:22.336229   13059 system_pods.go:89] "storage-provisioner" [01fe09f0-efa7-44cb-a41a-d2afe45c4573] Running
	I0904 19:27:22.336232   13059 system_pods.go:89] "tiller-deploy-b48cc5f79-gnf8s" [160e65ec-e4d0-4b19-baaa-862e4eabf268] Running
	I0904 19:27:22.336241   13059 system_pods.go:126] duration metric: took 7.008084ms to wait for k8s-apps to be running ...
	I0904 19:27:22.336247   13059 system_svc.go:44] waiting for kubelet service to be running ....
	I0904 19:27:22.336289   13059 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0904 19:27:22.351408   13059 system_svc.go:56] duration metric: took 15.15139ms WaitForService to wait for kubelet
	I0904 19:27:22.351442   13059 kubeadm.go:582] duration metric: took 48.112777478s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0904 19:27:22.351464   13059 node_conditions.go:102] verifying NodePressure condition ...
	I0904 19:27:22.354313   13059 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0904 19:27:22.354335   13059 node_conditions.go:123] node cpu capacity is 2
	I0904 19:27:22.354349   13059 node_conditions.go:105] duration metric: took 2.879933ms to run NodePressure ...
	I0904 19:27:22.354359   13059 start.go:241] waiting for startup goroutines ...
	I0904 19:27:22.450091   13059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 19:27:22.526602   13059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 19:27:22.952324   13059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 19:27:23.026297   13059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 19:27:23.449925   13059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 19:27:23.528005   13059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 19:27:23.951691   13059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 19:27:24.026512   13059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 19:27:24.450624   13059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 19:27:24.526510   13059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 19:27:24.950593   13059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 19:27:25.026793   13059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 19:27:25.451367   13059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 19:27:25.525387   13059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 19:27:25.949900   13059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 19:27:26.026076   13059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 19:27:26.452719   13059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 19:27:26.534384   13059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 19:27:26.950289   13059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 19:27:27.027044   13059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 19:27:27.451253   13059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 19:27:27.526577   13059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 19:27:27.950205   13059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 19:27:28.027598   13059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 19:27:28.450583   13059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 19:27:28.525684   13059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 19:27:28.951138   13059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 19:27:29.028153   13059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 19:27:29.450805   13059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 19:27:29.526661   13059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 19:27:29.950161   13059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 19:27:30.026530   13059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 19:27:30.475869   13059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 19:27:30.576784   13059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 19:27:30.951566   13059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 19:27:31.028301   13059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 19:27:31.453190   13059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 19:27:31.528961   13059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 19:27:31.950644   13059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 19:27:32.025710   13059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 19:27:32.451048   13059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 19:27:32.527018   13059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 19:27:32.949498   13059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 19:27:33.026063   13059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 19:27:33.450710   13059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 19:27:33.526531   13059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 19:27:33.950505   13059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 19:27:34.026456   13059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 19:27:34.450750   13059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 19:27:34.526116   13059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 19:27:34.950276   13059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 19:27:35.028049   13059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 19:27:35.449710   13059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 19:27:35.526832   13059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 19:27:36.129967   13059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 19:27:36.131447   13059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 19:27:36.452454   13059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 19:27:36.526545   13059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 19:27:36.951398   13059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 19:27:37.025774   13059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 19:27:37.452953   13059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 19:27:37.551757   13059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 19:27:37.951338   13059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 19:27:38.028990   13059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 19:27:38.450286   13059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 19:27:38.527499   13059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 19:27:38.950038   13059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 19:27:39.026123   13059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 19:27:39.450617   13059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 19:27:39.525767   13059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 19:27:39.950463   13059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 19:27:40.026872   13059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 19:27:40.450128   13059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 19:27:40.533109   13059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 19:27:40.950513   13059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 19:27:41.026447   13059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 19:27:41.451769   13059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 19:27:41.552525   13059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 19:27:41.952796   13059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 19:27:42.026123   13059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 19:27:42.449834   13059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 19:27:42.527445   13059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 19:27:42.949704   13059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 19:27:43.026070   13059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 19:27:43.449623   13059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 19:27:43.525639   13059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 19:27:43.950841   13059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 19:27:44.026271   13059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 19:27:44.451651   13059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 19:27:44.552869   13059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 19:27:44.950451   13059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 19:27:45.026812   13059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 19:27:45.450318   13059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 19:27:45.526737   13059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 19:27:45.950716   13059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 19:27:46.026620   13059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 19:27:46.450984   13059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 19:27:46.526051   13059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 19:27:46.949787   13059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 19:27:47.118056   13059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 19:27:47.452110   13059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 19:27:47.552246   13059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 19:27:47.950230   13059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 19:27:48.028605   13059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 19:27:48.482074   13059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 19:27:48.583418   13059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 19:27:48.951376   13059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 19:27:49.028568   13059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 19:27:49.471050   13059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 19:27:49.535343   13059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 19:27:49.952112   13059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 19:27:50.026881   13059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 19:27:50.455187   13059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 19:27:50.556181   13059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 19:27:50.952394   13059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 19:27:51.051882   13059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 19:27:51.450895   13059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 19:27:51.534809   13059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 19:27:51.950680   13059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 19:27:52.028271   13059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 19:27:52.450904   13059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 19:27:52.526757   13059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 19:27:52.949990   13059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 19:27:53.026088   13059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 19:27:53.612162   13059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 19:27:53.612494   13059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 19:27:53.951010   13059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 19:27:54.025917   13059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 19:27:54.450029   13059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 19:27:54.526301   13059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 19:27:54.950725   13059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 19:27:55.025924   13059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 19:27:55.450841   13059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 19:27:55.527941   13059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 19:27:55.950096   13059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 19:27:56.027120   13059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 19:27:56.451580   13059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 19:27:56.525932   13059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 19:27:56.951225   13059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 19:27:57.026686   13059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 19:27:57.468285   13059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 19:27:57.576788   13059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 19:27:57.966687   13059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0904 19:27:58.061483   13059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 19:27:58.450557   13059 kapi.go:107] duration metric: took 1m14.004587451s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0904 19:27:58.527877   13059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 19:27:59.027618   13059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 19:27:59.526772   13059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 19:28:00.035468   13059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 19:28:00.528018   13059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 19:28:01.032553   13059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 19:28:01.526836   13059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 19:28:02.027262   13059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 19:28:02.526527   13059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 19:28:03.027050   13059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 19:28:03.526862   13059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 19:28:04.026874   13059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0904 19:28:04.527012   13059 kapi.go:107] duration metric: took 1m16.505436958s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0904 19:28:11.649757   13059 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0904 19:28:11.649786   13059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 19:28:12.142826   13059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 19:28:12.642001   13059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 19:28:13.142342   13059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 19:28:13.641814   13059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 19:28:14.141179   13059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 19:28:14.642282   13059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 19:28:15.141546   13059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 19:28:15.642699   13059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 19:28:16.142374   13059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 19:28:16.641675   13059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 19:28:17.141523   13059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 19:28:17.641813   13059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 19:28:18.141781   13059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 19:28:18.642224   13059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 19:28:19.141801   13059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 19:28:19.642467   13059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 19:28:20.141645   13059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 19:28:20.641865   13059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 19:28:21.141595   13059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 19:28:21.642561   13059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 19:28:22.142642   13059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 19:28:22.642217   13059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 19:28:23.142107   13059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 19:28:23.641431   13059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 19:28:24.142328   13059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 19:28:24.641701   13059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 19:28:25.142268   13059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 19:28:25.641648   13059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 19:28:26.142426   13059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 19:28:26.641895   13059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 19:28:27.142063   13059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 19:28:27.642988   13059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 19:28:28.141662   13059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 19:28:28.642014   13059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 19:28:29.143735   13059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 19:28:29.642311   13059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 19:28:30.142389   13059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 19:28:30.641748   13059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 19:28:31.141431   13059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 19:28:31.641908   13059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 19:28:32.141730   13059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 19:28:32.642342   13059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 19:28:33.141706   13059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 19:28:33.641938   13059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 19:28:34.141571   13059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 19:28:34.641768   13059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 19:28:35.142650   13059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 19:28:35.641981   13059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 19:28:36.141654   13059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 19:28:36.642246   13059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 19:28:37.142122   13059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 19:28:37.641501   13059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 19:28:38.143789   13059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 19:28:38.641102   13059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 19:28:39.141619   13059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 19:28:39.642425   13059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 19:28:40.142007   13059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 19:28:40.641065   13059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 19:28:41.141894   13059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 19:28:41.644121   13059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 19:28:42.141935   13059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 19:28:42.641121   13059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 19:28:43.141769   13059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 19:28:43.641237   13059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 19:28:44.142574   13059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 19:28:44.641630   13059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 19:28:45.142130   13059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 19:28:45.641847   13059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 19:28:46.141375   13059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 19:28:46.641661   13059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 19:28:47.142132   13059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 19:28:47.641905   13059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 19:28:48.141791   13059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 19:28:48.642512   13059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 19:28:49.141711   13059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 19:28:49.641932   13059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 19:28:50.142506   13059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 19:28:50.642404   13059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 19:28:51.141824   13059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 19:28:51.640935   13059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 19:28:52.142565   13059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 19:28:52.642895   13059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 19:28:53.141435   13059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 19:28:53.648281   13059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 19:28:54.142441   13059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 19:28:54.641897   13059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 19:28:55.142730   13059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 19:28:55.642373   13059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 19:28:56.141946   13059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 19:28:56.642829   13059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 19:28:57.141585   13059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 19:28:57.641919   13059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 19:28:58.141683   13059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 19:28:58.642243   13059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 19:28:59.141984   13059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 19:28:59.641589   13059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 19:29:00.142001   13059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 19:29:00.642325   13059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 19:29:01.141708   13059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 19:29:01.641963   13059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 19:29:02.141413   13059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 19:29:02.642376   13059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 19:29:03.141538   13059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 19:29:03.641843   13059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 19:29:04.141743   13059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 19:29:04.640991   13059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 19:29:05.142142   13059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 19:29:05.641460   13059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 19:29:06.142414   13059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 19:29:06.641659   13059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 19:29:07.142677   13059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 19:29:07.641733   13059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 19:29:08.142539   13059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 19:29:08.641905   13059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 19:29:09.141062   13059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 19:29:09.643816   13059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 19:29:10.142501   13059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 19:29:10.641554   13059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 19:29:11.142465   13059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 19:29:11.642278   13059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 19:29:12.142073   13059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 19:29:12.641320   13059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 19:29:13.142044   13059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 19:29:13.641254   13059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 19:29:14.142078   13059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 19:29:14.641853   13059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 19:29:15.143433   13059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 19:29:15.642345   13059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 19:29:16.141699   13059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 19:29:16.642484   13059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 19:29:17.141746   13059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 19:29:17.641813   13059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 19:29:18.141396   13059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 19:29:18.641904   13059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 19:29:19.141542   13059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 19:29:19.642767   13059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0904 19:29:20.143307   13059 kapi.go:107] duration metric: took 2m30.505390571s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0904 19:29:20.145219   13059 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-586464 cluster.
	I0904 19:29:20.146600   13059 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0904 19:29:20.148032   13059 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0904 19:29:20.149749   13059 out.go:177] * Enabled addons: default-storageclass, nvidia-device-plugin, ingress-dns, storage-provisioner, volcano, cloud-spanner, helm-tiller, inspektor-gadget, metrics-server, yakd, storage-provisioner-rancher, volumesnapshots, registry, ingress, csi-hostpath-driver, gcp-auth
	I0904 19:29:20.151166   13059 addons.go:510] duration metric: took 2m45.912452282s for enable addons: enabled=[default-storageclass nvidia-device-plugin ingress-dns storage-provisioner volcano cloud-spanner helm-tiller inspektor-gadget metrics-server yakd storage-provisioner-rancher volumesnapshots registry ingress csi-hostpath-driver gcp-auth]
	I0904 19:29:20.151205   13059 start.go:246] waiting for cluster config update ...
	I0904 19:29:20.151222   13059 start.go:255] writing updated cluster config ...
	I0904 19:29:20.151478   13059 ssh_runner.go:195] Run: rm -f paused
	I0904 19:29:20.203740   13059 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0904 19:29:20.205707   13059 out.go:177] * Done! kubectl is now configured to use "addons-586464" cluster and "default" namespace by default
	
	
	==> Docker <==
	Sep 04 19:39:13 addons-586464 dockerd[1193]: time="2024-09-04T19:39:13.138576427Z" level=info msg="shim disconnected" id=2b00c39853a051aeefeea1d69a3ab37c7b8e2ee7bc3e02f061ba4e9fcf9cff11 namespace=moby
	Sep 04 19:39:13 addons-586464 dockerd[1193]: time="2024-09-04T19:39:13.138718407Z" level=warning msg="cleaning up after shim disconnected" id=2b00c39853a051aeefeea1d69a3ab37c7b8e2ee7bc3e02f061ba4e9fcf9cff11 namespace=moby
	Sep 04 19:39:13 addons-586464 dockerd[1193]: time="2024-09-04T19:39:13.138942362Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 04 19:39:17 addons-586464 dockerd[1187]: time="2024-09-04T19:39:17.921493258Z" level=info msg="ignoring event" container=7400ff5b45b1a803a5a54f68d86f252124079f362b68f8b9e95a0d3f368f7da5 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 04 19:39:17 addons-586464 dockerd[1193]: time="2024-09-04T19:39:17.924530769Z" level=info msg="shim disconnected" id=7400ff5b45b1a803a5a54f68d86f252124079f362b68f8b9e95a0d3f368f7da5 namespace=moby
	Sep 04 19:39:17 addons-586464 dockerd[1193]: time="2024-09-04T19:39:17.924963917Z" level=warning msg="cleaning up after shim disconnected" id=7400ff5b45b1a803a5a54f68d86f252124079f362b68f8b9e95a0d3f368f7da5 namespace=moby
	Sep 04 19:39:17 addons-586464 dockerd[1193]: time="2024-09-04T19:39:17.925088019Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 04 19:39:18 addons-586464 dockerd[1187]: time="2024-09-04T19:39:18.384761941Z" level=info msg="ignoring event" container=50bea3b97e3b0763606031d77c3a106e6c2cbeb1be050cec39ddeeb0968e795b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 04 19:39:18 addons-586464 dockerd[1193]: time="2024-09-04T19:39:18.396502757Z" level=info msg="shim disconnected" id=50bea3b97e3b0763606031d77c3a106e6c2cbeb1be050cec39ddeeb0968e795b namespace=moby
	Sep 04 19:39:18 addons-586464 dockerd[1193]: time="2024-09-04T19:39:18.396880168Z" level=warning msg="cleaning up after shim disconnected" id=50bea3b97e3b0763606031d77c3a106e6c2cbeb1be050cec39ddeeb0968e795b namespace=moby
	Sep 04 19:39:18 addons-586464 dockerd[1193]: time="2024-09-04T19:39:18.396990320Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 04 19:39:18 addons-586464 dockerd[1187]: time="2024-09-04T19:39:18.461979929Z" level=info msg="ignoring event" container=9aa24cda4484def3629670fb3b446d7b0a6d677bfa974e5a5be76d23a642fb15 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 04 19:39:18 addons-586464 dockerd[1193]: time="2024-09-04T19:39:18.464841821Z" level=info msg="shim disconnected" id=9aa24cda4484def3629670fb3b446d7b0a6d677bfa974e5a5be76d23a642fb15 namespace=moby
	Sep 04 19:39:18 addons-586464 dockerd[1193]: time="2024-09-04T19:39:18.464902467Z" level=warning msg="cleaning up after shim disconnected" id=9aa24cda4484def3629670fb3b446d7b0a6d677bfa974e5a5be76d23a642fb15 namespace=moby
	Sep 04 19:39:18 addons-586464 dockerd[1193]: time="2024-09-04T19:39:18.464912239Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 04 19:39:18 addons-586464 cri-dockerd[1088]: time="2024-09-04T19:39:18Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"registry-6fb4cdfc84-4pdpl_kube-system\": unexpected command output nsenter: cannot open /proc/3088/ns/net: No such file or directory\n with error: exit status 1"
	Sep 04 19:39:18 addons-586464 dockerd[1193]: time="2024-09-04T19:39:18.591551533Z" level=info msg="shim disconnected" id=ce16bf14869ccdd0f1cee07727d076ea4e8e99161c7537c23f5ead76f025f4f7 namespace=moby
	Sep 04 19:39:18 addons-586464 dockerd[1193]: time="2024-09-04T19:39:18.591654775Z" level=warning msg="cleaning up after shim disconnected" id=ce16bf14869ccdd0f1cee07727d076ea4e8e99161c7537c23f5ead76f025f4f7 namespace=moby
	Sep 04 19:39:18 addons-586464 dockerd[1193]: time="2024-09-04T19:39:18.591665465Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 04 19:39:18 addons-586464 dockerd[1187]: time="2024-09-04T19:39:18.592238155Z" level=info msg="ignoring event" container=ce16bf14869ccdd0f1cee07727d076ea4e8e99161c7537c23f5ead76f025f4f7 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 04 19:39:18 addons-586464 dockerd[1193]: time="2024-09-04T19:39:18.638011536Z" level=warning msg="cleanup warnings time=\"2024-09-04T19:39:18Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=moby
	Sep 04 19:39:18 addons-586464 dockerd[1187]: time="2024-09-04T19:39:18.705143474Z" level=info msg="ignoring event" container=0e5dc3b94a1282441285564c262a3b2f58dd698697c12c0f7193e2342baccd54 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 04 19:39:18 addons-586464 dockerd[1193]: time="2024-09-04T19:39:18.707546643Z" level=info msg="shim disconnected" id=0e5dc3b94a1282441285564c262a3b2f58dd698697c12c0f7193e2342baccd54 namespace=moby
	Sep 04 19:39:18 addons-586464 dockerd[1193]: time="2024-09-04T19:39:18.707608154Z" level=warning msg="cleaning up after shim disconnected" id=0e5dc3b94a1282441285564c262a3b2f58dd698697c12c0f7193e2342baccd54 namespace=moby
	Sep 04 19:39:18 addons-586464 dockerd[1193]: time="2024-09-04T19:39:18.707618918Z" level=info msg="cleaning up dead shim" namespace=moby
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	22cdbd1271566       kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6                                  15 seconds ago      Running             hello-world-app           0                   8b87fac5484ce       hello-world-app-55bf9c44b4-95kfw
	6a3b4489ed527       nginx@sha256:c04c18adc2a407740a397c8407c011fc6c90026a9b65cceddef7ae5484360158                                                24 seconds ago      Running             nginx                     0                   62afc12666c1b       nginx
	8b1851e9b72a5       a416a98b71e22                                                                                                                45 seconds ago      Exited              helper-pod                0                   0f219c836a26f       helper-pod-delete-pvc-fce74acb-36f4-4b36-9f36-2a553b5bb45c
	5c9840190d63c       busybox@sha256:34b191d63fbc93e25e275bfccf1b5365664e5ac28f06d974e8d50090fbb49f41                                              48 seconds ago      Exited              busybox                   0                   923771d99f84e       test-local-path
	f0e8a7353d8d5       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:e6c5b3bc32072ea370d34c27836efd11b3519d25bd444c2a8efc339cff0e20fb                 10 minutes ago      Running             gcp-auth                  0                   f40f4cc0e02ec       gcp-auth-89d5ffd79-j2zr4
	00c583aa93356       ce263a8653f9c                                                                                                                11 minutes ago      Exited              patch                     1                   49d73ccbca852       ingress-nginx-admission-patch-m55wh
	68aa95dd0a2e7       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a320a50cc91bd15fd2d6fa6de58bd98c1bd64b9a6f926ce23a600d87043455a3   11 minutes ago      Exited              create                    0                   f4bddd0f7c0f6       ingress-nginx-admission-create-8w5kr
	9aa24cda4484d       gcr.io/k8s-minikube/kube-registry-proxy@sha256:b3fa0b2df8737fdb85ad5918a7e2652527463e357afff83a5e5bb966bcedc367              12 minutes ago      Exited              registry-proxy            0                   0e5dc3b94a128       registry-proxy-cmjxq
	50bea3b97e3b0       registry@sha256:12120425f07de11a1b899e418d4b0ea174c8d4d572d45bdb640f93bc7ca06a3d                                             12 minutes ago      Exited              registry                  0                   ce16bf14869cc       registry-6fb4cdfc84-4pdpl
	09ea498b10582       6e38f40d628db                                                                                                                12 minutes ago      Running             storage-provisioner       0                   7f0da31450d52       storage-provisioner
	7e72142ef189a       cbb01a7bd410d                                                                                                                12 minutes ago      Running             coredns                   0                   ddcb1e3bf07c9       coredns-6f6b679f8f-zfdmz
	af060afeaca00       ad83b2ca7b09e                                                                                                                12 minutes ago      Running             kube-proxy                0                   b7c32a7fd3c09       kube-proxy-mjv7n
	000bcf7da9646       1766f54c897f0                                                                                                                12 minutes ago      Running             kube-scheduler            0                   310918c2f2e5b       kube-scheduler-addons-586464
	99a43d057e008       2e96e5913fc06                                                                                                                12 minutes ago      Running             etcd                      0                   386dbaaedb87e       etcd-addons-586464
	cec3bced85acf       045733566833c                                                                                                                12 minutes ago      Running             kube-controller-manager   0                   a41d2becfe3a4       kube-controller-manager-addons-586464
	c1d16f8628ac8       604f5db92eaa8                                                                                                                12 minutes ago      Running             kube-apiserver            0                   65f61a9e54aad       kube-apiserver-addons-586464
	
	
	==> coredns [7e72142ef189] <==
	[INFO] 10.244.0.23:55383 - 6663 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000137844s
	[INFO] 10.244.0.23:55383 - 17694 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000100751s
	[INFO] 10.244.0.23:35129 - 23158 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000108527s
	[INFO] 10.244.0.23:35129 - 43533 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000049872s
	[INFO] 10.244.0.23:55383 - 50171 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000142739s
	[INFO] 10.244.0.23:35129 - 58411 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000054145s
	[INFO] 10.244.0.23:35129 - 50030 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.00003815s
	[INFO] 10.244.0.23:55383 - 29255 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000148499s
	[INFO] 10.244.0.23:35129 - 22858 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000138281s
	[INFO] 10.244.0.23:35129 - 27735 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000139815s
	[INFO] 10.244.0.23:55383 - 55216 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000173348s
	[INFO] 10.244.0.23:36802 - 61275 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000109013s
	[INFO] 10.244.0.23:36802 - 38042 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000096403s
	[INFO] 10.244.0.23:36802 - 2934 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000103942s
	[INFO] 10.244.0.23:36802 - 52162 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000113871s
	[INFO] 10.244.0.23:36802 - 48046 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000099082s
	[INFO] 10.244.0.23:36802 - 41288 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000045758s
	[INFO] 10.244.0.23:36802 - 8471 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000045843s
	[INFO] 10.244.0.23:38127 - 42384 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000063926s
	[INFO] 10.244.0.23:38127 - 53878 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000062301s
	[INFO] 10.244.0.23:38127 - 54391 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000059024s
	[INFO] 10.244.0.23:38127 - 55251 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000044116s
	[INFO] 10.244.0.23:38127 - 64119 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000101012s
	[INFO] 10.244.0.23:38127 - 61193 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000043555s
	[INFO] 10.244.0.23:38127 - 40944 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000058101s
	
	
	==> describe nodes <==
	Name:               addons-586464
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-586464
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=8bb47038f7304b869a8e06758662cf35b40689af
	                    minikube.k8s.io/name=addons-586464
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_04T19_26_29_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-586464
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 04 Sep 2024 19:26:26 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-586464
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 04 Sep 2024 19:39:15 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 04 Sep 2024 19:39:05 +0000   Wed, 04 Sep 2024 19:26:24 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 04 Sep 2024 19:39:05 +0000   Wed, 04 Sep 2024 19:26:24 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 04 Sep 2024 19:39:05 +0000   Wed, 04 Sep 2024 19:26:24 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 04 Sep 2024 19:39:05 +0000   Wed, 04 Sep 2024 19:26:32 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.55
	  Hostname:    addons-586464
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	System Info:
	  Machine ID:                 2de6e798bf9d47fe9625742905ea4249
	  System UUID:                2de6e798-bf9d-47fe-9625-742905ea4249
	  Boot ID:                    e0dc414f-4633-4032-8e49-9a254a09c336
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.2.0
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m16s
	  default                     hello-world-app-55bf9c44b4-95kfw         0 (0%)        0 (0%)      0 (0%)           0 (0%)         18s
	  default                     nginx                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         29s
	  gcp-auth                    gcp-auth-89d5ffd79-j2zr4                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 coredns-6f6b679f8f-zfdmz                 100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     12m
	  kube-system                 etcd-addons-586464                       100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         12m
	  kube-system                 kube-apiserver-addons-586464             250m (12%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-controller-manager-addons-586464    200m (10%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-mjv7n                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-addons-586464             100m (5%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 storage-provisioner                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (4%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 12m                kube-proxy       
	  Normal  Starting                 12m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  12m (x8 over 12m)  kubelet          Node addons-586464 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    12m (x8 over 12m)  kubelet          Node addons-586464 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     12m (x7 over 12m)  kubelet          Node addons-586464 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  12m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 12m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  12m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  12m                kubelet          Node addons-586464 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    12m                kubelet          Node addons-586464 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     12m                kubelet          Node addons-586464 status is now: NodeHasSufficientPID
	  Normal  NodeReady                12m                kubelet          Node addons-586464 status is now: NodeReady
	  Normal  RegisteredNode           12m                node-controller  Node addons-586464 event: Registered Node addons-586464 in Controller
	
	
	==> dmesg <==
	[  +5.426151] kauditd_printk_skb: 6 callbacks suppressed
	[  +6.749385] kauditd_printk_skb: 36 callbacks suppressed
	[  +5.038287] kauditd_printk_skb: 30 callbacks suppressed
	[  +8.240174] kauditd_printk_skb: 54 callbacks suppressed
	[Sep 4 19:28] kauditd_printk_skb: 16 callbacks suppressed
	[  +5.458658] kauditd_printk_skb: 33 callbacks suppressed
	[ +43.641117] kauditd_printk_skb: 28 callbacks suppressed
	[Sep 4 19:29] kauditd_printk_skb: 40 callbacks suppressed
	[  +6.838582] kauditd_printk_skb: 40 callbacks suppressed
	[ +27.425259] kauditd_printk_skb: 2 callbacks suppressed
	[Sep 4 19:30] kauditd_printk_skb: 20 callbacks suppressed
	[ +20.206892] kauditd_printk_skb: 2 callbacks suppressed
	[Sep 4 19:33] kauditd_printk_skb: 28 callbacks suppressed
	[Sep 4 19:38] kauditd_printk_skb: 28 callbacks suppressed
	[  +5.593134] kauditd_printk_skb: 5 callbacks suppressed
	[  +5.936572] kauditd_printk_skb: 31 callbacks suppressed
	[  +5.970342] kauditd_printk_skb: 31 callbacks suppressed
	[  +5.012222] kauditd_printk_skb: 46 callbacks suppressed
	[  +9.769329] kauditd_printk_skb: 19 callbacks suppressed
	[  +5.759865] kauditd_printk_skb: 19 callbacks suppressed
	[  +5.309861] kauditd_printk_skb: 19 callbacks suppressed
	[Sep 4 19:39] kauditd_printk_skb: 25 callbacks suppressed
	[  +5.986529] kauditd_printk_skb: 43 callbacks suppressed
	[  +5.056024] kauditd_printk_skb: 27 callbacks suppressed
	[  +5.003573] kauditd_printk_skb: 8 callbacks suppressed
	
	
	==> etcd [99a43d057e00] <==
	{"level":"warn","ts":"2024-09-04T19:27:53.571831Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"182.417851ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-04T19:27:53.571870Z","caller":"traceutil/trace.go:171","msg":"trace[1153299303] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1287; }","duration":"182.451145ms","start":"2024-09-04T19:27:53.389406Z","end":"2024-09-04T19:27:53.571857Z","steps":["trace[1153299303] 'range keys from in-memory index tree'  (duration: 182.374075ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-04T19:27:53.571990Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"152.798847ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"warn","ts":"2024-09-04T19:27:53.571995Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"417.606478ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" ","response":"range_response_count:1 size:1113"}
	{"level":"info","ts":"2024-09-04T19:27:53.572029Z","caller":"traceutil/trace.go:171","msg":"trace[1414245574] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:1287; }","duration":"417.648024ms","start":"2024-09-04T19:27:53.154372Z","end":"2024-09-04T19:27:53.572020Z","steps":["trace[1414245574] 'range keys from in-memory index tree'  (duration: 417.432493ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-04T19:27:53.572004Z","caller":"traceutil/trace.go:171","msg":"trace[221798541] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1287; }","duration":"152.815736ms","start":"2024-09-04T19:27:53.419184Z","end":"2024-09-04T19:27:53.572000Z","steps":["trace[221798541] 'range keys from in-memory index tree'  (duration: 152.760804ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-04T19:27:53.572052Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-04T19:27:53.154330Z","time spent":"417.717419ms","remote":"127.0.0.1:42466","response type":"/etcdserverpb.KV/Range","request count":0,"request size":67,"response count":1,"response size":1136,"request content":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" "}
	{"level":"warn","ts":"2024-09-04T19:27:53.572322Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"250.71828ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/replicasets/\" range_end:\"/registry/replicasets0\" count_only:true ","response":"range_response_count:0 size:7"}
	{"level":"info","ts":"2024-09-04T19:27:53.572345Z","caller":"traceutil/trace.go:171","msg":"trace[1563664793] range","detail":"{range_begin:/registry/replicasets/; range_end:/registry/replicasets0; response_count:0; response_revision:1287; }","duration":"250.743766ms","start":"2024-09-04T19:27:53.321594Z","end":"2024-09-04T19:27:53.572338Z","steps":["trace[1563664793] 'count revisions from in-memory index tree'  (duration: 250.572553ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-04T19:27:57.430571Z","caller":"traceutil/trace.go:171","msg":"trace[1162195935] linearizableReadLoop","detail":"{readStateIndex:1328; appliedIndex:1327; }","duration":"320.033506ms","start":"2024-09-04T19:27:57.110516Z","end":"2024-09-04T19:27:57.430549Z","steps":["trace[1162195935] 'read index received'  (duration: 319.891007ms)","trace[1162195935] 'applied index is now lower than readState.Index'  (duration: 142.044µs)"],"step_count":2}
	{"level":"info","ts":"2024-09-04T19:27:57.431084Z","caller":"traceutil/trace.go:171","msg":"trace[199396741] transaction","detail":"{read_only:false; response_revision:1294; number_of_response:1; }","duration":"358.857645ms","start":"2024-09-04T19:27:57.072216Z","end":"2024-09-04T19:27:57.431073Z","steps":["trace[199396741] 'process raft request'  (duration: 358.236849ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-04T19:27:57.431459Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-04T19:27:57.072199Z","time spent":"358.93648ms","remote":"127.0.0.1:42778","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":3362,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/mutatingwebhookconfigurations/volcano-admission-service-queues-mutate\" mod_revision:867 > success:<request_put:<key:\"/registry/mutatingwebhookconfigurations/volcano-admission-service-queues-mutate\" value_size:3275 >> failure:<request_range:<key:\"/registry/mutatingwebhookconfigurations/volcano-admission-service-queues-mutate\" > >"}
	{"level":"warn","ts":"2024-09-04T19:27:57.432152Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"321.643639ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-04T19:27:57.432222Z","caller":"traceutil/trace.go:171","msg":"trace[453238209] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1294; }","duration":"321.721758ms","start":"2024-09-04T19:27:57.110492Z","end":"2024-09-04T19:27:57.432214Z","steps":["trace[453238209] 'agreement among raft nodes before linearized reading'  (duration: 321.528437ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-04T19:27:57.432247Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-04T19:27:57.110458Z","time spent":"321.782975ms","remote":"127.0.0.1:42486","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":28,"request content":"key:\"/registry/pods\" limit:1 "}
	{"level":"info","ts":"2024-09-04T19:27:58.967631Z","caller":"traceutil/trace.go:171","msg":"trace[1976710215] transaction","detail":"{read_only:false; response_revision:1311; number_of_response:1; }","duration":"106.04238ms","start":"2024-09-04T19:27:58.861563Z","end":"2024-09-04T19:27:58.967606Z","steps":["trace[1976710215] 'process raft request'  (duration: 74.928541ms)","trace[1976710215] 'compare'  (duration: 31.014262ms)"],"step_count":2}
	{"level":"warn","ts":"2024-09-04T19:28:02.222226Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"111.413025ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-04T19:28:02.223572Z","caller":"traceutil/trace.go:171","msg":"trace[1656552788] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1335; }","duration":"112.765932ms","start":"2024-09-04T19:28:02.110781Z","end":"2024-09-04T19:28:02.223547Z","steps":["trace[1656552788] 'range keys from in-memory index tree'  (duration: 111.348725ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-04T19:29:46.294243Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"122.217896ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" ","response":"range_response_count:1 size:1113"}
	{"level":"info","ts":"2024-09-04T19:29:46.294391Z","caller":"traceutil/trace.go:171","msg":"trace[2054379640] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:1626; }","duration":"122.407849ms","start":"2024-09-04T19:29:46.171965Z","end":"2024-09-04T19:29:46.294373Z","steps":["trace[2054379640] 'range keys from in-memory index tree'  (duration: 122.104835ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-04T19:36:24.929926Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1928}
	{"level":"info","ts":"2024-09-04T19:36:25.028869Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1928,"took":"97.253991ms","hash":3528258824,"current-db-size-bytes":8826880,"current-db-size":"8.8 MB","current-db-size-in-use-bytes":5033984,"current-db-size-in-use":"5.0 MB"}
	{"level":"info","ts":"2024-09-04T19:36:25.028938Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3528258824,"revision":1928,"compact-revision":-1}
	{"level":"warn","ts":"2024-09-04T19:38:36.513494Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"123.326373ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 serializable:true keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-04T19:38:36.513627Z","caller":"traceutil/trace.go:171","msg":"trace[1092014494] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:2792; }","duration":"123.509966ms","start":"2024-09-04T19:38:36.390097Z","end":"2024-09-04T19:38:36.513607Z","steps":["trace[1092014494] 'range keys from in-memory index tree'  (duration: 123.304951ms)"],"step_count":1}
	
	
	==> gcp-auth [f0e8a7353d8d] <==
	2024/09/04 19:30:03 Ready to write response ...
	2024/09/04 19:38:17 Ready to marshal response ...
	2024/09/04 19:38:17 Ready to write response ...
	2024/09/04 19:38:17 Ready to marshal response ...
	2024/09/04 19:38:17 Ready to write response ...
	2024/09/04 19:38:23 Ready to marshal response ...
	2024/09/04 19:38:23 Ready to write response ...
	2024/09/04 19:38:23 Ready to marshal response ...
	2024/09/04 19:38:23 Ready to write response ...
	2024/09/04 19:38:31 Ready to marshal response ...
	2024/09/04 19:38:31 Ready to write response ...
	2024/09/04 19:38:31 Ready to marshal response ...
	2024/09/04 19:38:31 Ready to write response ...
	2024/09/04 19:38:31 Ready to marshal response ...
	2024/09/04 19:38:31 Ready to write response ...
	2024/09/04 19:38:33 Ready to marshal response ...
	2024/09/04 19:38:33 Ready to write response ...
	2024/09/04 19:38:45 Ready to marshal response ...
	2024/09/04 19:38:45 Ready to write response ...
	2024/09/04 19:38:50 Ready to marshal response ...
	2024/09/04 19:38:50 Ready to write response ...
	2024/09/04 19:39:01 Ready to marshal response ...
	2024/09/04 19:39:01 Ready to write response ...
	2024/09/04 19:39:07 Ready to marshal response ...
	2024/09/04 19:39:07 Ready to write response ...
	
	
	==> kernel <==
	 19:39:19 up 13 min,  0 users,  load average: 1.34, 1.06, 0.78
	Linux addons-586464 5.10.207 #1 SMP Tue Sep 3 21:45:30 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [c1d16f8628ac] <==
	W0904 19:29:55.513567       1 cacher.go:171] Terminating all watchers from cacher jobs.batch.volcano.sh
	W0904 19:29:55.559926       1 cacher.go:171] Terminating all watchers from cacher jobflows.flow.volcano.sh
	W0904 19:29:55.932874       1 cacher.go:171] Terminating all watchers from cacher jobtemplates.flow.volcano.sh
	I0904 19:38:12.105252       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0904 19:38:13.186308       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I0904 19:38:23.113829       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	I0904 19:38:27.036813       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I0904 19:38:31.155466       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.100.89.172"}
	E0904 19:38:49.951344       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	I0904 19:38:50.186162       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I0904 19:38:50.394665       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.104.19.1"}
	I0904 19:39:01.940213       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.97.148.222"}
	I0904 19:39:02.316872       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0904 19:39:02.316912       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0904 19:39:02.354000       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0904 19:39:02.354091       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0904 19:39:02.365222       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0904 19:39:02.368939       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0904 19:39:02.409051       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0904 19:39:02.409326       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0904 19:39:02.463094       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0904 19:39:02.463671       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0904 19:39:03.366762       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0904 19:39:03.463799       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	W0904 19:39:03.519851       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	
	
	==> kube-controller-manager [cec3bced85ac] <==
	I0904 19:39:05.270402       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="8.886236ms"
	I0904 19:39:05.271030       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="31.36µs"
	W0904 19:39:05.627433       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0904 19:39:05.627471       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0904 19:39:06.647734       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0904 19:39:06.647913       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0904 19:39:07.603868       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0904 19:39:07.604218       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0904 19:39:07.844635       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0904 19:39:07.844933       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0904 19:39:08.881847       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0904 19:39:08.881888       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0904 19:39:10.214004       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0904 19:39:10.214043       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0904 19:39:11.394890       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0904 19:39:11.395031       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0904 19:39:12.558076       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0904 19:39:12.558116       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0904 19:39:12.941095       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/tiller-deploy-b48cc5f79" duration="3.328µs"
	W0904 19:39:14.855372       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0904 19:39:14.855436       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0904 19:39:14.959054       1 namespace_controller.go:187] "Namespace has been deleted" logger="namespace-controller" namespace="ingress-nginx"
	I0904 19:39:18.308701       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/registry-6fb4cdfc84" duration="5.46µs"
	W0904 19:39:18.396258       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0904 19:39:18.396345       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	
	
	==> kube-proxy [af060afeaca0] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0904 19:26:35.559416       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0904 19:26:35.586830       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.55"]
	E0904 19:26:35.586927       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0904 19:26:35.871169       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0904 19:26:35.871201       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0904 19:26:35.871231       1 server_linux.go:169] "Using iptables Proxier"
	I0904 19:26:35.877830       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0904 19:26:35.878157       1 server.go:483] "Version info" version="v1.31.0"
	I0904 19:26:35.878168       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0904 19:26:35.879227       1 config.go:197] "Starting service config controller"
	I0904 19:26:35.879312       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0904 19:26:35.879332       1 config.go:326] "Starting node config controller"
	I0904 19:26:35.879336       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0904 19:26:35.879557       1 config.go:104] "Starting endpoint slice config controller"
	I0904 19:26:35.879568       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0904 19:26:35.980424       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0904 19:26:35.980540       1 shared_informer.go:320] Caches are synced for service config
	I0904 19:26:35.980545       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [000bcf7da964] <==
	W0904 19:26:26.268233       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0904 19:26:26.268325       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0904 19:26:26.275552       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	E0904 19:26:26.275453       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0904 19:26:27.121246       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0904 19:26:27.121507       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0904 19:26:27.154947       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0904 19:26:27.154991       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0904 19:26:27.313922       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0904 19:26:27.314030       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0904 19:26:27.337573       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0904 19:26:27.337787       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0904 19:26:27.424144       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0904 19:26:27.424409       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0904 19:26:27.425937       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0904 19:26:27.425963       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0904 19:26:27.493814       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0904 19:26:27.493865       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0904 19:26:27.527478       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0904 19:26:27.527525       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0904 19:26:27.550707       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0904 19:26:27.550766       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0904 19:26:27.833459       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0904 19:26:27.834103       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I0904 19:26:30.652377       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 04 19:39:12 addons-586464 kubelet[1951]: I0904 19:39:12.705916    1951 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-bxsmq\" (UniqueName: \"kubernetes.io/projected/11987714-9214-4e14-93f2-8acd046dd8fa-kube-api-access-bxsmq\") on node \"addons-586464\" DevicePath \"\""
	Sep 04 19:39:12 addons-586464 kubelet[1951]: E0904 19:39:12.802049    1951 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\\\"\"" pod="default/busybox" podUID="f7be796b-cf4b-45a5-8931-b8f5912a74f4"
	Sep 04 19:39:12 addons-586464 kubelet[1951]: I0904 19:39:12.811492    1951 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="11987714-9214-4e14-93f2-8acd046dd8fa" path="/var/lib/kubelet/pods/11987714-9214-4e14-93f2-8acd046dd8fa/volumes"
	Sep 04 19:39:13 addons-586464 kubelet[1951]: I0904 19:39:13.311027    1951 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wcrbj\" (UniqueName: \"kubernetes.io/projected/160e65ec-e4d0-4b19-baaa-862e4eabf268-kube-api-access-wcrbj\") pod \"160e65ec-e4d0-4b19-baaa-862e4eabf268\" (UID: \"160e65ec-e4d0-4b19-baaa-862e4eabf268\") "
	Sep 04 19:39:13 addons-586464 kubelet[1951]: I0904 19:39:13.317080    1951 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/160e65ec-e4d0-4b19-baaa-862e4eabf268-kube-api-access-wcrbj" (OuterVolumeSpecName: "kube-api-access-wcrbj") pod "160e65ec-e4d0-4b19-baaa-862e4eabf268" (UID: "160e65ec-e4d0-4b19-baaa-862e4eabf268"). InnerVolumeSpecName "kube-api-access-wcrbj". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 04 19:39:13 addons-586464 kubelet[1951]: I0904 19:39:13.410341    1951 scope.go:117] "RemoveContainer" containerID="f9c6107544b0ce7d14f0a4afd89ade151e2779d0f57845273f83571f0fb055f3"
	Sep 04 19:39:13 addons-586464 kubelet[1951]: I0904 19:39:13.414596    1951 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-wcrbj\" (UniqueName: \"kubernetes.io/projected/160e65ec-e4d0-4b19-baaa-862e4eabf268-kube-api-access-wcrbj\") on node \"addons-586464\" DevicePath \"\""
	Sep 04 19:39:13 addons-586464 kubelet[1951]: I0904 19:39:13.440782    1951 scope.go:117] "RemoveContainer" containerID="f9c6107544b0ce7d14f0a4afd89ade151e2779d0f57845273f83571f0fb055f3"
	Sep 04 19:39:13 addons-586464 kubelet[1951]: E0904 19:39:13.442203    1951 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = Unknown desc = Error response from daemon: No such container: f9c6107544b0ce7d14f0a4afd89ade151e2779d0f57845273f83571f0fb055f3" containerID="f9c6107544b0ce7d14f0a4afd89ade151e2779d0f57845273f83571f0fb055f3"
	Sep 04 19:39:13 addons-586464 kubelet[1951]: I0904 19:39:13.442259    1951 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"docker","ID":"f9c6107544b0ce7d14f0a4afd89ade151e2779d0f57845273f83571f0fb055f3"} err="failed to get container status \"f9c6107544b0ce7d14f0a4afd89ade151e2779d0f57845273f83571f0fb055f3\": rpc error: code = Unknown desc = Error response from daemon: No such container: f9c6107544b0ce7d14f0a4afd89ade151e2779d0f57845273f83571f0fb055f3"
	Sep 04 19:39:13 addons-586464 kubelet[1951]: I0904 19:39:13.442342    1951 scope.go:117] "RemoveContainer" containerID="3b20b5f44d27d1250ae6b4de778ea01ec9820f6b9ab61bed7538242e9d85429d"
	Sep 04 19:39:14 addons-586464 kubelet[1951]: I0904 19:39:14.810440    1951 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="160e65ec-e4d0-4b19-baaa-862e4eabf268" path="/var/lib/kubelet/pods/160e65ec-e4d0-4b19-baaa-862e4eabf268/volumes"
	Sep 04 19:39:18 addons-586464 kubelet[1951]: I0904 19:39:18.048742    1951 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5llng\" (UniqueName: \"kubernetes.io/projected/9a23a44f-c09c-4635-aa1d-a21a5339b39b-kube-api-access-5llng\") pod \"9a23a44f-c09c-4635-aa1d-a21a5339b39b\" (UID: \"9a23a44f-c09c-4635-aa1d-a21a5339b39b\") "
	Sep 04 19:39:18 addons-586464 kubelet[1951]: I0904 19:39:18.048799    1951 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/9a23a44f-c09c-4635-aa1d-a21a5339b39b-gcp-creds\") pod \"9a23a44f-c09c-4635-aa1d-a21a5339b39b\" (UID: \"9a23a44f-c09c-4635-aa1d-a21a5339b39b\") "
	Sep 04 19:39:18 addons-586464 kubelet[1951]: I0904 19:39:18.048891    1951 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9a23a44f-c09c-4635-aa1d-a21a5339b39b-gcp-creds" (OuterVolumeSpecName: "gcp-creds") pod "9a23a44f-c09c-4635-aa1d-a21a5339b39b" (UID: "9a23a44f-c09c-4635-aa1d-a21a5339b39b"). InnerVolumeSpecName "gcp-creds". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Sep 04 19:39:18 addons-586464 kubelet[1951]: I0904 19:39:18.055320    1951 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9a23a44f-c09c-4635-aa1d-a21a5339b39b-kube-api-access-5llng" (OuterVolumeSpecName: "kube-api-access-5llng") pod "9a23a44f-c09c-4635-aa1d-a21a5339b39b" (UID: "9a23a44f-c09c-4635-aa1d-a21a5339b39b"). InnerVolumeSpecName "kube-api-access-5llng". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 04 19:39:18 addons-586464 kubelet[1951]: I0904 19:39:18.150089    1951 reconciler_common.go:288] "Volume detached for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/9a23a44f-c09c-4635-aa1d-a21a5339b39b-gcp-creds\") on node \"addons-586464\" DevicePath \"\""
	Sep 04 19:39:18 addons-586464 kubelet[1951]: I0904 19:39:18.150115    1951 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-5llng\" (UniqueName: \"kubernetes.io/projected/9a23a44f-c09c-4635-aa1d-a21a5339b39b-kube-api-access-5llng\") on node \"addons-586464\" DevicePath \"\""
	Sep 04 19:39:18 addons-586464 kubelet[1951]: I0904 19:39:18.754168    1951 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l5lxj\" (UniqueName: \"kubernetes.io/projected/bc53f46d-169c-40d2-af79-12a0385b32f2-kube-api-access-l5lxj\") pod \"bc53f46d-169c-40d2-af79-12a0385b32f2\" (UID: \"bc53f46d-169c-40d2-af79-12a0385b32f2\") "
	Sep 04 19:39:18 addons-586464 kubelet[1951]: I0904 19:39:18.757607    1951 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc53f46d-169c-40d2-af79-12a0385b32f2-kube-api-access-l5lxj" (OuterVolumeSpecName: "kube-api-access-l5lxj") pod "bc53f46d-169c-40d2-af79-12a0385b32f2" (UID: "bc53f46d-169c-40d2-af79-12a0385b32f2"). InnerVolumeSpecName "kube-api-access-l5lxj". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 04 19:39:18 addons-586464 kubelet[1951]: I0904 19:39:18.810425    1951 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9a23a44f-c09c-4635-aa1d-a21a5339b39b" path="/var/lib/kubelet/pods/9a23a44f-c09c-4635-aa1d-a21a5339b39b/volumes"
	Sep 04 19:39:18 addons-586464 kubelet[1951]: I0904 19:39:18.855424    1951 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t72w8\" (UniqueName: \"kubernetes.io/projected/8268ad2e-f9a0-47f5-ba61-028fe8e93107-kube-api-access-t72w8\") pod \"8268ad2e-f9a0-47f5-ba61-028fe8e93107\" (UID: \"8268ad2e-f9a0-47f5-ba61-028fe8e93107\") "
	Sep 04 19:39:18 addons-586464 kubelet[1951]: I0904 19:39:18.856045    1951 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-l5lxj\" (UniqueName: \"kubernetes.io/projected/bc53f46d-169c-40d2-af79-12a0385b32f2-kube-api-access-l5lxj\") on node \"addons-586464\" DevicePath \"\""
	Sep 04 19:39:18 addons-586464 kubelet[1951]: I0904 19:39:18.858542    1951 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8268ad2e-f9a0-47f5-ba61-028fe8e93107-kube-api-access-t72w8" (OuterVolumeSpecName: "kube-api-access-t72w8") pod "8268ad2e-f9a0-47f5-ba61-028fe8e93107" (UID: "8268ad2e-f9a0-47f5-ba61-028fe8e93107"). InnerVolumeSpecName "kube-api-access-t72w8". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 04 19:39:18 addons-586464 kubelet[1951]: I0904 19:39:18.957446    1951 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-t72w8\" (UniqueName: \"kubernetes.io/projected/8268ad2e-f9a0-47f5-ba61-028fe8e93107-kube-api-access-t72w8\") on node \"addons-586464\" DevicePath \"\""
	
	
	==> storage-provisioner [09ea498b1058] <==
	I0904 19:26:43.605618       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0904 19:26:43.703230       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0904 19:26:43.704991       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0904 19:26:43.827781       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0904 19:26:43.828030       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-586464_fb9ed88a-01b2-45c0-ba95-1196ed0a1f2c!
	I0904 19:26:43.834509       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"fd890f9c-d156-4e39-b088-d0a63cbdf28d", APIVersion:"v1", ResourceVersion:"694", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-586464_fb9ed88a-01b2-45c0-ba95-1196ed0a1f2c became leader
	I0904 19:26:43.931474       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-586464_fb9ed88a-01b2-45c0-ba95-1196ed0a1f2c!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-586464 -n addons-586464
helpers_test.go:261: (dbg) Run:  kubectl --context addons-586464 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/Registry]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-586464 describe pod busybox
helpers_test.go:282: (dbg) kubectl --context addons-586464 describe pod busybox:

-- stdout --
	Name:             busybox
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-586464/192.168.39.55
	Start Time:       Wed, 04 Sep 2024 19:30:03 +0000
	Labels:           integration-test=busybox
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.28
	IPs:
	  IP:  10.244.0.28
	Containers:
	  busybox:
	    Container ID:  
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      sleep
	      3600
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:
	      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
	      PROJECT_ID:                      this_is_fake
	      GCP_PROJECT:                     this_is_fake
	      GCLOUD_PROJECT:                  this_is_fake
	      GOOGLE_CLOUD_PROJECT:            this_is_fake
	      CLOUDSDK_CORE_PROJECT:           this_is_fake
	    Mounts:
	      /google-app-creds.json from gcp-creds (ro)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-254pn (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-254pn:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	  gcp-creds:
	    Type:          HostPath (bare host directory volume)
	    Path:          /var/lib/minikube/google_application_credentials.json
	    HostPathType:  File
	QoS Class:         BestEffort
	Node-Selectors:    <none>
	Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                     From               Message
	  ----     ------     ----                    ----               -------
	  Normal   Scheduled  9m17s                   default-scheduler  Successfully assigned default/busybox to addons-586464
	  Warning  Failed     7m54s (x6 over 9m16s)   kubelet            Error: ImagePullBackOff
	  Normal   Pulling    7m41s (x4 over 9m17s)   kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Warning  Failed     7m41s (x4 over 9m16s)   kubelet            Failed to pull image "gcr.io/k8s-minikube/busybox:1.28.4-glibc": Error response from daemon: Head "https://gcr.io/v2/k8s-minikube/busybox/manifests/1.28.4-glibc": unauthorized: authentication failed
	  Warning  Failed     7m41s (x4 over 9m16s)   kubelet            Error: ErrImagePull
	  Normal   BackOff    4m11s (x22 over 9m16s)  kubelet            Back-off pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"

-- /stdout --
helpers_test.go:285: <<< TestAddons/parallel/Registry FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/Registry (74.51s)

TestFunctional/parallel/License (0.13s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2288: (dbg) Run:  out/minikube-linux-amd64 license
functional_test.go:2288: (dbg) Non-zero exit: out/minikube-linux-amd64 license: exit status 40 (134.574092ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to INET_LICENSES: Failed to download licenses: download request did not return a 200, received: 404
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_license_42713f820c0ac68901ecf7b12bfdf24c2cafe65d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2289: command "\n\n" failed: exit status 40
--- FAIL: TestFunctional/parallel/License (0.13s)


Test pass (308/341)

Order passed test Duration
3 TestDownloadOnly/v1.20.0/json-events 8.4
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.06
9 TestDownloadOnly/v1.20.0/DeleteAll 0.14
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.12
12 TestDownloadOnly/v1.31.0/json-events 4.98
13 TestDownloadOnly/v1.31.0/preload-exists 0
17 TestDownloadOnly/v1.31.0/LogsDuration 0.06
18 TestDownloadOnly/v1.31.0/DeleteAll 0.13
19 TestDownloadOnly/v1.31.0/DeleteAlwaysSucceeds 0.12
21 TestBinaryMirror 0.59
22 TestOffline 124.59
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.05
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.05
27 TestAddons/Setup 220.44
29 TestAddons/serial/Volcano 42.94
31 TestAddons/serial/GCPAuth/Namespaces 0.11
34 TestAddons/parallel/Ingress 22.08
35 TestAddons/parallel/InspektorGadget 11.71
36 TestAddons/parallel/MetricsServer 6.87
37 TestAddons/parallel/HelmTiller 10.21
39 TestAddons/parallel/CSI 57.18
40 TestAddons/parallel/Headlamp 19.55
41 TestAddons/parallel/CloudSpanner 6.51
42 TestAddons/parallel/LocalPath 54.27
43 TestAddons/parallel/NvidiaDevicePlugin 6.49
44 TestAddons/parallel/Yakd 10.65
45 TestAddons/StoppedEnableDisable 13.56
46 TestCertOptions 68.81
47 TestCertExpiration 295.77
48 TestDockerFlags 58.59
49 TestForceSystemdFlag 76.55
50 TestForceSystemdEnv 76.58
52 TestKVMDriverInstallOrUpdate 4.27
56 TestErrorSpam/setup 46.25
57 TestErrorSpam/start 0.34
58 TestErrorSpam/status 0.72
59 TestErrorSpam/pause 1.18
60 TestErrorSpam/unpause 1.39
61 TestErrorSpam/stop 16.13
64 TestFunctional/serial/CopySyncFile 0
65 TestFunctional/serial/StartWithProxy 94.97
66 TestFunctional/serial/AuditLog 0
67 TestFunctional/serial/SoftStart 40.41
68 TestFunctional/serial/KubeContext 0.04
69 TestFunctional/serial/KubectlGetPods 0.07
72 TestFunctional/serial/CacheCmd/cache/add_remote 2.15
73 TestFunctional/serial/CacheCmd/cache/add_local 1.24
74 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.04
75 TestFunctional/serial/CacheCmd/cache/list 0.04
76 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.21
77 TestFunctional/serial/CacheCmd/cache/cache_reload 1.09
78 TestFunctional/serial/CacheCmd/cache/delete 0.09
79 TestFunctional/serial/MinikubeKubectlCmd 0.1
80 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.1
81 TestFunctional/serial/ExtraConfig 41.87
82 TestFunctional/serial/ComponentHealth 0.06
83 TestFunctional/serial/LogsCmd 0.92
84 TestFunctional/serial/LogsFileCmd 0.95
85 TestFunctional/serial/InvalidService 5.94
87 TestFunctional/parallel/ConfigCmd 0.34
88 TestFunctional/parallel/DashboardCmd 15.6
89 TestFunctional/parallel/DryRun 0.26
90 TestFunctional/parallel/InternationalLanguage 0.13
91 TestFunctional/parallel/StatusCmd 0.83
95 TestFunctional/parallel/ServiceCmdConnect 19.45
96 TestFunctional/parallel/AddonsCmd 0.11
97 TestFunctional/parallel/PersistentVolumeClaim 48.22
99 TestFunctional/parallel/SSHCmd 0.42
100 TestFunctional/parallel/CpCmd 1.32
101 TestFunctional/parallel/MySQL 29.94
102 TestFunctional/parallel/FileSync 0.21
103 TestFunctional/parallel/CertSync 1.32
107 TestFunctional/parallel/NodeLabels 0.05
109 TestFunctional/parallel/NonActiveRuntimeDisabled 0.23
112 TestFunctional/parallel/DockerEnv/bash 0.84
113 TestFunctional/parallel/UpdateContextCmd/no_changes 0.1
114 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.09
115 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.09
116 TestFunctional/parallel/Version/short 0.04
117 TestFunctional/parallel/Version/components 0.9
118 TestFunctional/parallel/ImageCommands/ImageListShort 0.21
119 TestFunctional/parallel/ImageCommands/ImageListTable 0.23
120 TestFunctional/parallel/ImageCommands/ImageListJson 0.22
121 TestFunctional/parallel/ImageCommands/ImageListYaml 0.21
122 TestFunctional/parallel/ImageCommands/ImageBuild 3.36
123 TestFunctional/parallel/ImageCommands/Setup 1.54
124 TestFunctional/parallel/ServiceCmd/DeployApp 26.2
134 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.05
135 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.84
136 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.48
137 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.38
138 TestFunctional/parallel/ImageCommands/ImageRemove 0.82
139 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.72
140 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.48
141 TestFunctional/parallel/ServiceCmd/List 0.42
142 TestFunctional/parallel/ServiceCmd/JSONOutput 0.42
143 TestFunctional/parallel/ServiceCmd/HTTPS 0.38
144 TestFunctional/parallel/ProfileCmd/profile_not_create 0.29
145 TestFunctional/parallel/ServiceCmd/Format 0.33
146 TestFunctional/parallel/ProfileCmd/profile_list 0.26
147 TestFunctional/parallel/ServiceCmd/URL 0.28
148 TestFunctional/parallel/ProfileCmd/profile_json_output 0.3
149 TestFunctional/parallel/MountCmd/any-port 7.53
150 TestFunctional/parallel/MountCmd/specific-port 1.52
151 TestFunctional/parallel/MountCmd/VerifyCleanup 1.51
152 TestFunctional/delete_echo-server_images 0.04
153 TestFunctional/delete_my-image_image 0.01
154 TestFunctional/delete_minikube_cached_images 0.02
155 TestGvisorAddon 188.91
158 TestMultiControlPlane/serial/StartCluster 216.13
159 TestMultiControlPlane/serial/DeployApp 6.4
160 TestMultiControlPlane/serial/PingHostFromPods 1.2
161 TestMultiControlPlane/serial/AddWorkerNode 62.16
162 TestMultiControlPlane/serial/NodeLabels 0.06
163 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.55
164 TestMultiControlPlane/serial/CopyFile 12.55
165 TestMultiControlPlane/serial/StopSecondaryNode 13.23
166 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.41
167 TestMultiControlPlane/serial/RestartSecondaryNode 43.48
168 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.5
169 TestMultiControlPlane/serial/RestartClusterKeepsNodes 223.27
170 TestMultiControlPlane/serial/DeleteSecondaryNode 7.04
171 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.37
172 TestMultiControlPlane/serial/StopCluster 37.61
173 TestMultiControlPlane/serial/RestartCluster 121.73
174 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.37
175 TestMultiControlPlane/serial/AddSecondaryNode 188.09
176 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.51
179 TestImageBuild/serial/Setup 49.06
180 TestImageBuild/serial/NormalBuild 2.08
181 TestImageBuild/serial/BuildWithBuildArg 1.25
182 TestImageBuild/serial/BuildWithDockerIgnore 1.02
183 TestImageBuild/serial/BuildWithSpecifiedDockerfile 0.81
187 TestJSONOutput/start/Command 86.9
188 TestJSONOutput/start/Audit 0
190 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
191 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
193 TestJSONOutput/pause/Command 0.55
194 TestJSONOutput/pause/Audit 0
196 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
197 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
199 TestJSONOutput/unpause/Command 0.54
200 TestJSONOutput/unpause/Audit 0
202 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
203 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
205 TestJSONOutput/stop/Command 7.49
206 TestJSONOutput/stop/Audit 0
208 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
209 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
210 TestErrorJSONOutput 0.19
215 TestMainNoArgs 0.04
216 TestMinikubeProfile 102.02
219 TestMountStart/serial/StartWithMountFirst 27.77
220 TestMountStart/serial/VerifyMountFirst 0.37
221 TestMountStart/serial/StartWithMountSecond 27.5
222 TestMountStart/serial/VerifyMountSecond 0.37
223 TestMountStart/serial/DeleteFirst 0.85
224 TestMountStart/serial/VerifyMountPostDelete 0.37
225 TestMountStart/serial/Stop 3.28
226 TestMountStart/serial/RestartStopped 26.66
227 TestMountStart/serial/VerifyMountPostStop 0.37
230 TestMultiNode/serial/FreshStart2Nodes 126.77
231 TestMultiNode/serial/DeployApp2Nodes 4.15
232 TestMultiNode/serial/PingHostFrom2Pods 0.8
233 TestMultiNode/serial/AddNode 56.81
234 TestMultiNode/serial/MultiNodeLabels 0.06
235 TestMultiNode/serial/ProfileList 0.21
236 TestMultiNode/serial/CopyFile 6.99
237 TestMultiNode/serial/StopNode 3.35
238 TestMultiNode/serial/StartAfterStop 42.04
239 TestMultiNode/serial/RestartKeepsNodes 191.68
240 TestMultiNode/serial/DeleteNode 2.26
241 TestMultiNode/serial/StopMultiNode 25.08
242 TestMultiNode/serial/RestartMultiNode 117.12
243 TestMultiNode/serial/ValidateNameConflict 49.5
248 TestPreload 190.8
250 TestScheduledStopUnix 122.02
251 TestSkaffold 129.43
254 TestRunningBinaryUpgrade 226.71
256 TestKubernetesUpgrade 272.41
260 TestPause/serial/Start 86.94
272 TestNoKubernetes/serial/StartNoK8sWithVersion 0.07
273 TestNoKubernetes/serial/StartWithK8s 105.87
274 TestPause/serial/SecondStartNoReconfiguration 64.39
275 TestNoKubernetes/serial/StartWithStopK8s 20.25
276 TestNoKubernetes/serial/Start 48.73
277 TestPause/serial/Pause 0.68
278 TestPause/serial/VerifyStatus 0.35
279 TestPause/serial/Unpause 0.71
280 TestPause/serial/PauseAgain 0.93
281 TestPause/serial/DeletePaused 1.13
282 TestPause/serial/VerifyDeletedResources 0.46
283 TestNoKubernetes/serial/VerifyK8sNotRunning 0.24
284 TestNoKubernetes/serial/ProfileList 22.42
285 TestNoKubernetes/serial/Stop 2.33
286 TestNoKubernetes/serial/StartNoArgs 30.06
287 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.23
288 TestStoppedBinaryUpgrade/Setup 0.58
289 TestStoppedBinaryUpgrade/Upgrade 187.75
297 TestStoppedBinaryUpgrade/MinikubeLogs 1.01
298 TestNetworkPlugins/group/auto/Start 118.71
299 TestNetworkPlugins/group/kindnet/Start 101.8
300 TestNetworkPlugins/group/calico/Start 109.86
301 TestNetworkPlugins/group/auto/KubeletFlags 0.21
302 TestNetworkPlugins/group/auto/NetCatPod 11.22
303 TestNetworkPlugins/group/auto/DNS 0.19
304 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
305 TestNetworkPlugins/group/auto/Localhost 0.15
306 TestNetworkPlugins/group/auto/HairPin 0.16
307 TestNetworkPlugins/group/kindnet/KubeletFlags 0.25
308 TestNetworkPlugins/group/kindnet/NetCatPod 14.32
309 TestNetworkPlugins/group/custom-flannel/Start 76.05
310 TestNetworkPlugins/group/kindnet/DNS 0.25
311 TestNetworkPlugins/group/kindnet/Localhost 0.16
312 TestNetworkPlugins/group/kindnet/HairPin 0.18
313 TestNetworkPlugins/group/calico/ControllerPod 6.01
314 TestNetworkPlugins/group/false/Start 71.15
315 TestNetworkPlugins/group/calico/KubeletFlags 0.22
316 TestNetworkPlugins/group/calico/NetCatPod 12.27
317 TestNetworkPlugins/group/enable-default-cni/Start 93.17
318 TestNetworkPlugins/group/calico/DNS 0.19
319 TestNetworkPlugins/group/calico/Localhost 0.16
320 TestNetworkPlugins/group/calico/HairPin 0.16
321 TestNetworkPlugins/group/flannel/Start 106.87
322 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.2
323 TestNetworkPlugins/group/custom-flannel/NetCatPod 11.24
324 TestNetworkPlugins/group/custom-flannel/DNS 0.33
325 TestNetworkPlugins/group/custom-flannel/Localhost 0.16
326 TestNetworkPlugins/group/custom-flannel/HairPin 0.17
327 TestNetworkPlugins/group/false/KubeletFlags 0.24
328 TestNetworkPlugins/group/false/NetCatPod 13.26
329 TestNetworkPlugins/group/false/DNS 0.16
330 TestNetworkPlugins/group/false/Localhost 0.14
331 TestNetworkPlugins/group/false/HairPin 0.17
332 TestNetworkPlugins/group/bridge/Start 107.95
333 TestNetworkPlugins/group/kubenet/Start 86.88
334 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.21
335 TestNetworkPlugins/group/enable-default-cni/NetCatPod 10.27
336 TestNetworkPlugins/group/enable-default-cni/DNS 0.15
337 TestNetworkPlugins/group/enable-default-cni/Localhost 0.13
338 TestNetworkPlugins/group/enable-default-cni/HairPin 0.12
340 TestStartStop/group/old-k8s-version/serial/FirstStart 198.59
341 TestNetworkPlugins/group/flannel/ControllerPod 6.01
342 TestNetworkPlugins/group/flannel/KubeletFlags 0.22
343 TestNetworkPlugins/group/flannel/NetCatPod 10.22
344 TestNetworkPlugins/group/flannel/DNS 0.22
345 TestNetworkPlugins/group/flannel/Localhost 0.21
346 TestNetworkPlugins/group/flannel/HairPin 0.19
348 TestStartStop/group/no-preload/serial/FirstStart 116.03
349 TestNetworkPlugins/group/kubenet/KubeletFlags 0.23
350 TestNetworkPlugins/group/kubenet/NetCatPod 13.31
351 TestNetworkPlugins/group/bridge/KubeletFlags 0.48
352 TestNetworkPlugins/group/bridge/NetCatPod 12.68
353 TestNetworkPlugins/group/kubenet/DNS 16.81
354 TestNetworkPlugins/group/bridge/DNS 0.16
355 TestNetworkPlugins/group/bridge/Localhost 0.14
356 TestNetworkPlugins/group/bridge/HairPin 0.13
357 TestNetworkPlugins/group/kubenet/Localhost 0.16
358 TestNetworkPlugins/group/kubenet/HairPin 0.19
360 TestStartStop/group/embed-certs/serial/FirstStart 204.77
362 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 72.85
363 TestStartStop/group/no-preload/serial/DeployApp 10.33
364 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.03
365 TestStartStop/group/no-preload/serial/Stop 12.7
366 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 8.3
367 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.19
368 TestStartStop/group/no-preload/serial/SecondStart 300.55
369 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.96
370 TestStartStop/group/default-k8s-diff-port/serial/Stop 12.59
371 TestStartStop/group/old-k8s-version/serial/DeployApp 9.49
372 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.18
373 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 318
374 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.88
375 TestStartStop/group/old-k8s-version/serial/Stop 13.35
376 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.19
377 TestStartStop/group/old-k8s-version/serial/SecondStart 407.65
378 TestStartStop/group/embed-certs/serial/DeployApp 8.28
379 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.93
380 TestStartStop/group/embed-certs/serial/Stop 13.33
381 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.18
382 TestStartStop/group/embed-certs/serial/SecondStart 299.61
383 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6.01
384 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.08
385 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.21
386 TestStartStop/group/no-preload/serial/Pause 2.51
388 TestStartStop/group/newest-cni/serial/FirstStart 61.09
389 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 8.01
390 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.1
391 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.27
392 TestStartStop/group/default-k8s-diff-port/serial/Pause 3.76
393 TestStartStop/group/newest-cni/serial/DeployApp 0
394 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.82
395 TestStartStop/group/newest-cni/serial/Stop 7.52
396 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.18
397 TestStartStop/group/newest-cni/serial/SecondStart 37.34
398 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
399 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
400 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.2
401 TestStartStop/group/newest-cni/serial/Pause 2.3
402 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6
403 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.08
404 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.2
405 TestStartStop/group/embed-certs/serial/Pause 2.4
406 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6.01
407 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.07
408 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.2
409 TestStartStop/group/old-k8s-version/serial/Pause 2.25

TestDownloadOnly/v1.20.0/json-events (8.4s)

=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-622108 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=kvm2 
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-622108 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=kvm2 : (8.404075115s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (8.40s)

TestDownloadOnly/v1.20.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.20.0/preload-exists
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

TestDownloadOnly/v1.20.0/LogsDuration (0.06s)

=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-622108
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-622108: exit status 85 (59.47727ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-622108 | jenkins | v1.34.0 | 04 Sep 24 19:25 UTC |          |
	|         | -p download-only-622108        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/04 19:25:24
	Running on machine: ubuntu-20-agent-7
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0904 19:25:24.915672   12443 out.go:345] Setting OutFile to fd 1 ...
	I0904 19:25:24.915800   12443 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0904 19:25:24.915837   12443 out.go:358] Setting ErrFile to fd 2...
	I0904 19:25:24.915863   12443 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0904 19:25:24.916290   12443 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19575-5257/.minikube/bin
	W0904 19:25:24.916423   12443 root.go:314] Error reading config file at /home/jenkins/minikube-integration/19575-5257/.minikube/config/config.json: open /home/jenkins/minikube-integration/19575-5257/.minikube/config/config.json: no such file or directory
	I0904 19:25:24.916992   12443 out.go:352] Setting JSON to true
	I0904 19:25:24.917874   12443 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":473,"bootTime":1725477452,"procs":175,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0904 19:25:24.917934   12443 start.go:139] virtualization: kvm guest
	I0904 19:25:24.920135   12443 out.go:97] [download-only-622108] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	W0904 19:25:24.920235   12443 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/19575-5257/.minikube/cache/preloaded-tarball: no such file or directory
	I0904 19:25:24.920285   12443 notify.go:220] Checking for updates...
	I0904 19:25:24.921482   12443 out.go:169] MINIKUBE_LOCATION=19575
	I0904 19:25:24.922768   12443 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0904 19:25:24.923913   12443 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19575-5257/kubeconfig
	I0904 19:25:24.925283   12443 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19575-5257/.minikube
	I0904 19:25:24.926632   12443 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0904 19:25:24.928805   12443 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0904 19:25:24.929005   12443 driver.go:394] Setting default libvirt URI to qemu:///system
	I0904 19:25:25.027563   12443 out.go:97] Using the kvm2 driver based on user configuration
	I0904 19:25:25.027595   12443 start.go:297] selected driver: kvm2
	I0904 19:25:25.027604   12443 start.go:901] validating driver "kvm2" against <nil>
	I0904 19:25:25.027929   12443 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0904 19:25:25.028067   12443 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19575-5257/.minikube/bin:/home/jenkins/workspace/KVM_Linux_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0904 19:25:25.042708   12443 install.go:137] /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0904 19:25:25.042766   12443 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0904 19:25:25.043339   12443 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0904 19:25:25.043497   12443 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0904 19:25:25.043554   12443 cni.go:84] Creating CNI manager for ""
	I0904 19:25:25.043570   12443 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0904 19:25:25.043619   12443 start.go:340] cluster config:
	{Name:download-only-622108 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-622108 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:&lt;nil&gt; ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0904 19:25:25.043798   12443 iso.go:125] acquiring lock: {Name:mke56ad6fec9dae1744ebaac12ff812ec06347d7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0904 19:25:25.045925   12443 out.go:97] Downloading VM boot image ...
	I0904 19:25:25.045961   12443 download.go:107] Downloading: https://storage.googleapis.com/minikube/iso/minikube-v1.34.0-amd64.iso?checksum=file:https://storage.googleapis.com/minikube/iso/minikube-v1.34.0-amd64.iso.sha256 -> /home/jenkins/minikube-integration/19575-5257/.minikube/cache/iso/amd64/minikube-v1.34.0-amd64.iso
	I0904 19:25:28.237886   12443 out.go:97] Starting "download-only-622108" primary control-plane node in "download-only-622108" cluster
	I0904 19:25:28.237910   12443 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0904 19:25:28.264070   12443 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4
	I0904 19:25:28.264099   12443 cache.go:56] Caching tarball of preloaded images
	I0904 19:25:28.264257   12443 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0904 19:25:28.266096   12443 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0904 19:25:28.266112   12443 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4 ...
	I0904 19:25:28.293852   12443 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4?checksum=md5:9a82241e9b8b4ad2b5cca73108f2c7a3 -> /home/jenkins/minikube-integration/19575-5257/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4
	
	
	* The control-plane node download-only-622108 host does not exist
	  To start a cluster, run: "minikube start -p download-only-622108"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.06s)

TestDownloadOnly/v1.20.0/DeleteAll (0.14s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.14s)

TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.12s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-622108
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.12s)

TestDownloadOnly/v1.31.0/json-events (4.98s)

=== RUN   TestDownloadOnly/v1.31.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-144619 --force --alsologtostderr --kubernetes-version=v1.31.0 --container-runtime=docker --driver=kvm2 
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-144619 --force --alsologtostderr --kubernetes-version=v1.31.0 --container-runtime=docker --driver=kvm2 : (4.974786585s)
--- PASS: TestDownloadOnly/v1.31.0/json-events (4.98s)

TestDownloadOnly/v1.31.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.31.0/preload-exists
--- PASS: TestDownloadOnly/v1.31.0/preload-exists (0.00s)

TestDownloadOnly/v1.31.0/LogsDuration (0.06s)

=== RUN   TestDownloadOnly/v1.31.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-144619
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-144619: exit status 85 (58.38434ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-622108 | jenkins | v1.34.0 | 04 Sep 24 19:25 UTC |                     |
	|         | -p download-only-622108        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.34.0 | 04 Sep 24 19:25 UTC | 04 Sep 24 19:25 UTC |
	| delete  | -p download-only-622108        | download-only-622108 | jenkins | v1.34.0 | 04 Sep 24 19:25 UTC | 04 Sep 24 19:25 UTC |
	| start   | -o=json --download-only        | download-only-144619 | jenkins | v1.34.0 | 04 Sep 24 19:25 UTC |                     |
	|         | -p download-only-144619        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/04 19:25:33
	Running on machine: ubuntu-20-agent-7
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0904 19:25:33.645636   12635 out.go:345] Setting OutFile to fd 1 ...
	I0904 19:25:33.646070   12635 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0904 19:25:33.646129   12635 out.go:358] Setting ErrFile to fd 2...
	I0904 19:25:33.646146   12635 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0904 19:25:33.646602   12635 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19575-5257/.minikube/bin
	I0904 19:25:33.647446   12635 out.go:352] Setting JSON to true
	I0904 19:25:33.648388   12635 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":482,"bootTime":1725477452,"procs":173,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0904 19:25:33.648453   12635 start.go:139] virtualization: kvm guest
	I0904 19:25:33.650494   12635 out.go:97] [download-only-144619] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0904 19:25:33.650647   12635 notify.go:220] Checking for updates...
	I0904 19:25:33.652233   12635 out.go:169] MINIKUBE_LOCATION=19575
	I0904 19:25:33.653884   12635 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0904 19:25:33.655563   12635 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19575-5257/kubeconfig
	I0904 19:25:33.656964   12635 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19575-5257/.minikube
	I0904 19:25:33.658433   12635 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	
	
	* The control-plane node download-only-144619 host does not exist
	  To start a cluster, run: "minikube start -p download-only-144619"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.0/LogsDuration (0.06s)

TestDownloadOnly/v1.31.0/DeleteAll (0.13s)

=== RUN   TestDownloadOnly/v1.31.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.31.0/DeleteAll (0.13s)

TestDownloadOnly/v1.31.0/DeleteAlwaysSucceeds (0.12s)

=== RUN   TestDownloadOnly/v1.31.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-144619
--- PASS: TestDownloadOnly/v1.31.0/DeleteAlwaysSucceeds (0.12s)

TestBinaryMirror (0.59s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-787676 --alsologtostderr --binary-mirror http://127.0.0.1:41895 --driver=kvm2 
helpers_test.go:175: Cleaning up "binary-mirror-787676" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-787676
--- PASS: TestBinaryMirror (0.59s)

TestOffline (124.59s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-docker-125580 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2 
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-docker-125580 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2 : (2m3.553861374s)
helpers_test.go:175: Cleaning up "offline-docker-125580" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-docker-125580
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-docker-125580: (1.034311308s)
--- PASS: TestOffline (124.59s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1037: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-586464
addons_test.go:1037: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-586464: exit status 85 (49.156072ms)

-- stdout --
	* Profile "addons-586464" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-586464"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1048: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-586464
addons_test.go:1048: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-586464: exit status 85 (50.76984ms)

-- stdout --
	* Profile "addons-586464" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-586464"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

TestAddons/Setup (220.44s)

=== RUN   TestAddons/Setup
addons_test.go:110: (dbg) Run:  out/minikube-linux-amd64 start -p addons-586464 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=kvm2  --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:110: (dbg) Done: out/minikube-linux-amd64 start -p addons-586464 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=kvm2  --addons=ingress --addons=ingress-dns --addons=helm-tiller: (3m40.439950287s)
--- PASS: TestAddons/Setup (220.44s)

TestAddons/serial/Volcano (42.94s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:913: volcano-controller stabilized in 18.141829ms
addons_test.go:905: volcano-admission stabilized in 18.230468ms
addons_test.go:897: volcano-scheduler stabilized in 18.315738ms
addons_test.go:919: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-scheduler-576bc46687-sjxlq" [f74b8c93-4c7c-4b31-bf62-ed6715298f12] Running
addons_test.go:919: (dbg) TestAddons/serial/Volcano: app=volcano-scheduler healthy within 5.003576279s
addons_test.go:923: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-admission-77d7d48b68-2k8pd" [e6ab77b2-d7b5-4922-8c4d-d956e0b0a470] Running
addons_test.go:923: (dbg) TestAddons/serial/Volcano: app=volcano-admission healthy within 5.003844746s
addons_test.go:927: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-controllers-56675bb4d5-d69t4" [4ce76091-380f-4c5e-b27d-d98d8d099a64] Running
addons_test.go:927: (dbg) TestAddons/serial/Volcano: app=volcano-controller healthy within 5.004205138s
addons_test.go:932: (dbg) Run:  kubectl --context addons-586464 delete -n volcano-system job volcano-admission-init
addons_test.go:938: (dbg) Run:  kubectl --context addons-586464 create -f testdata/vcjob.yaml
addons_test.go:946: (dbg) Run:  kubectl --context addons-586464 get vcjob -n my-volcano
addons_test.go:964: (dbg) TestAddons/serial/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:344: "test-job-nginx-0" [ee801973-7b2e-468c-ab56-7fef1f4efa54] Pending
helpers_test.go:344: "test-job-nginx-0" [ee801973-7b2e-468c-ab56-7fef1f4efa54] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "test-job-nginx-0" [ee801973-7b2e-468c-ab56-7fef1f4efa54] Running
addons_test.go:964: (dbg) TestAddons/serial/Volcano: volcano.sh/job-name=test-job healthy within 17.004467436s
addons_test.go:968: (dbg) Run:  out/minikube-linux-amd64 -p addons-586464 addons disable volcano --alsologtostderr -v=1
addons_test.go:968: (dbg) Done: out/minikube-linux-amd64 -p addons-586464 addons disable volcano --alsologtostderr -v=1: (10.533310673s)
--- PASS: TestAddons/serial/Volcano (42.94s)

TestAddons/serial/GCPAuth/Namespaces (0.11s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:656: (dbg) Run:  kubectl --context addons-586464 create ns new-namespace
addons_test.go:670: (dbg) Run:  kubectl --context addons-586464 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.11s)

TestAddons/parallel/Ingress (22.08s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-586464 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-586464 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-586464 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [57db9c2a-33ff-452e-abd8-c8a004497bba] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [57db9c2a-33ff-452e-abd8-c8a004497bba] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 11.004128607s
addons_test.go:264: (dbg) Run:  out/minikube-linux-amd64 -p addons-586464 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:288: (dbg) Run:  kubectl --context addons-586464 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-linux-amd64 -p addons-586464 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.39.55
addons_test.go:308: (dbg) Run:  out/minikube-linux-amd64 -p addons-586464 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:308: (dbg) Done: out/minikube-linux-amd64 -p addons-586464 addons disable ingress-dns --alsologtostderr -v=1: (2.024259002s)
addons_test.go:313: (dbg) Run:  out/minikube-linux-amd64 -p addons-586464 addons disable ingress --alsologtostderr -v=1
addons_test.go:313: (dbg) Done: out/minikube-linux-amd64 -p addons-586464 addons disable ingress --alsologtostderr -v=1: (7.825285029s)
--- PASS: TestAddons/parallel/Ingress (22.08s)

TestAddons/parallel/InspektorGadget (11.71s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:848: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-njnzz" [accd0fbb-db93-4af2-a35f-f74c2348a67d] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:848: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.004190702s
addons_test.go:851: (dbg) Run:  out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-586464
addons_test.go:851: (dbg) Done: out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-586464: (5.705946467s)
--- PASS: TestAddons/parallel/InspektorGadget (11.71s)

TestAddons/parallel/MetricsServer (6.87s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:409: metrics-server stabilized in 4.576052ms
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-84c5f94fbc-jtggb" [0f496763-5e7a-440e-8fb3-5f15aa8ff867] Running
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 6.00612407s
addons_test.go:417: (dbg) Run:  kubectl --context addons-586464 top pods -n kube-system
addons_test.go:434: (dbg) Run:  out/minikube-linux-amd64 -p addons-586464 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (6.87s)

TestAddons/parallel/HelmTiller (10.21s)

=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:458: tiller-deploy stabilized in 2.988111ms
addons_test.go:460: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:344: "tiller-deploy-b48cc5f79-gnf8s" [160e65ec-e4d0-4b19-baaa-862e4eabf268] Running
addons_test.go:460: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.00444387s
addons_test.go:475: (dbg) Run:  kubectl --context addons-586464 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:475: (dbg) Done: kubectl --context addons-586464 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (4.709360954s)
addons_test.go:492: (dbg) Run:  out/minikube-linux-amd64 -p addons-586464 addons disable helm-tiller --alsologtostderr -v=1
--- PASS: TestAddons/parallel/HelmTiller (10.21s)

TestAddons/parallel/CSI (57.18s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
addons_test.go:567: csi-hostpath-driver pods stabilized in 6.338177ms
addons_test.go:570: (dbg) Run:  kubectl --context addons-586464 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:575: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-586464 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-586464 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-586464 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-586464 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-586464 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-586464 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-586464 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-586464 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-586464 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-586464 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-586464 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-586464 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-586464 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:580: (dbg) Run:  kubectl --context addons-586464 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:585: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [b1f2e940-7cb0-430c-a129-d0a874c49bac] Pending
helpers_test.go:344: "task-pv-pod" [b1f2e940-7cb0-430c-a129-d0a874c49bac] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [b1f2e940-7cb0-430c-a129-d0a874c49bac] Running
addons_test.go:585: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 9.004996066s
addons_test.go:590: (dbg) Run:  kubectl --context addons-586464 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:595: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-586464 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-586464 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:600: (dbg) Run:  kubectl --context addons-586464 delete pod task-pv-pod
addons_test.go:606: (dbg) Run:  kubectl --context addons-586464 delete pvc hpvc
addons_test.go:612: (dbg) Run:  kubectl --context addons-586464 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:617: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-586464 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-586464 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-586464 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-586464 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-586464 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-586464 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-586464 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-586464 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-586464 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-586464 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-586464 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-586464 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-586464 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-586464 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-586464 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-586464 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-586464 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:622: (dbg) Run:  kubectl --context addons-586464 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:627: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [c0233381-bf86-4059-8f38-8aa7b17d0b7a] Pending
helpers_test.go:344: "task-pv-pod-restore" [c0233381-bf86-4059-8f38-8aa7b17d0b7a] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [c0233381-bf86-4059-8f38-8aa7b17d0b7a] Running
addons_test.go:627: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.004663384s
addons_test.go:632: (dbg) Run:  kubectl --context addons-586464 delete pod task-pv-pod-restore
addons_test.go:632: (dbg) Done: kubectl --context addons-586464 delete pod task-pv-pod-restore: (1.418756566s)
addons_test.go:636: (dbg) Run:  kubectl --context addons-586464 delete pvc hpvc-restore
addons_test.go:640: (dbg) Run:  kubectl --context addons-586464 delete volumesnapshot new-snapshot-demo
addons_test.go:644: (dbg) Run:  out/minikube-linux-amd64 -p addons-586464 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:644: (dbg) Done: out/minikube-linux-amd64 -p addons-586464 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.721949301s)
addons_test.go:648: (dbg) Run:  out/minikube-linux-amd64 -p addons-586464 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (57.18s)

TestAddons/parallel/Headlamp (19.55s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:830: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-586464 --alsologtostderr -v=1
addons_test.go:835: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-57fb76fcdb-br2jp" [dc3b8c5e-7474-4e6e-9aec-4f82de8fb545] Pending
helpers_test.go:344: "headlamp-57fb76fcdb-br2jp" [dc3b8c5e-7474-4e6e-9aec-4f82de8fb545] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-57fb76fcdb-br2jp" [dc3b8c5e-7474-4e6e-9aec-4f82de8fb545] Running / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-57fb76fcdb-br2jp" [dc3b8c5e-7474-4e6e-9aec-4f82de8fb545] Running
addons_test.go:835: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 13.004435378s
addons_test.go:839: (dbg) Run:  out/minikube-linux-amd64 -p addons-586464 addons disable headlamp --alsologtostderr -v=1
addons_test.go:839: (dbg) Done: out/minikube-linux-amd64 -p addons-586464 addons disable headlamp --alsologtostderr -v=1: (5.66511585s)
--- PASS: TestAddons/parallel/Headlamp (19.55s)

TestAddons/parallel/CloudSpanner (6.51s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:867: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-769b77f747-mjrr6" [115db9dc-8908-40b8-8ff0-b7a80010d3a7] Running
addons_test.go:867: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 6.003901628s
addons_test.go:870: (dbg) Run:  out/minikube-linux-amd64 addons disable cloud-spanner -p addons-586464
--- PASS: TestAddons/parallel/CloudSpanner (6.51s)

TestAddons/parallel/LocalPath (54.27s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:982: (dbg) Run:  kubectl --context addons-586464 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:988: (dbg) Run:  kubectl --context addons-586464 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:992: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-586464 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-586464 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-586464 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-586464 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-586464 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-586464 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-586464 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:995: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [27efda99-769c-4a9f-b675-24df0d00bf37] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [27efda99-769c-4a9f-b675-24df0d00bf37] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [27efda99-769c-4a9f-b675-24df0d00bf37] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:995: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 4.003783831s
addons_test.go:1000: (dbg) Run:  kubectl --context addons-586464 get pvc test-pvc -o=json
addons_test.go:1009: (dbg) Run:  out/minikube-linux-amd64 -p addons-586464 ssh "cat /opt/local-path-provisioner/pvc-fce74acb-36f4-4b36-9f36-2a553b5bb45c_default_test-pvc/file1"
addons_test.go:1021: (dbg) Run:  kubectl --context addons-586464 delete pod test-local-path
addons_test.go:1025: (dbg) Run:  kubectl --context addons-586464 delete pvc test-pvc
addons_test.go:1029: (dbg) Run:  out/minikube-linux-amd64 -p addons-586464 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1029: (dbg) Done: out/minikube-linux-amd64 -p addons-586464 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.481741588s)
--- PASS: TestAddons/parallel/LocalPath (54.27s)

TestAddons/parallel/NvidiaDevicePlugin (6.49s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1061: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-lrw9n" [70f69048-7703-4d70-9f03-60264762e688] Running
addons_test.go:1061: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.004138099s
addons_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 addons disable nvidia-device-plugin -p addons-586464
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.49s)

TestAddons/parallel/Yakd (10.65s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

=== CONT  TestAddons/parallel/Yakd
addons_test.go:1072: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-67d98fc6b-rprsx" [55be4942-b14b-45af-9db6-de7b592367a7] Running
addons_test.go:1072: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.004141616s
addons_test.go:1076: (dbg) Run:  out/minikube-linux-amd64 -p addons-586464 addons disable yakd --alsologtostderr -v=1
addons_test.go:1076: (dbg) Done: out/minikube-linux-amd64 -p addons-586464 addons disable yakd --alsologtostderr -v=1: (5.645867719s)
--- PASS: TestAddons/parallel/Yakd (10.65s)

TestAddons/StoppedEnableDisable (13.56s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:174: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-586464
addons_test.go:174: (dbg) Done: out/minikube-linux-amd64 stop -p addons-586464: (13.293230388s)
addons_test.go:178: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-586464
addons_test.go:182: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-586464
addons_test.go:187: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-586464
--- PASS: TestAddons/StoppedEnableDisable (13.56s)

TestCertOptions (68.81s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-179929 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2 
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-179929 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2 : (1m7.38649155s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-179929 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-179929 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-179929 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-179929" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-179929
--- PASS: TestCertOptions (68.81s)

TestCertExpiration (295.77s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-693908 --memory=2048 --cert-expiration=3m --driver=kvm2 
E0904 20:29:20.214665   12431 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19575-5257/.minikube/profiles/addons-586464/client.crt: no such file or directory" logger="UnhandledError"
E0904 20:29:22.497834   12431 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19575-5257/.minikube/profiles/skaffold-117538/client.crt: no such file or directory" logger="UnhandledError"
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-693908 --memory=2048 --cert-expiration=3m --driver=kvm2 : (1m14.575409648s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-693908 --memory=2048 --cert-expiration=8760h --driver=kvm2 
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-693908 --memory=2048 --cert-expiration=8760h --driver=kvm2 : (40.308493367s)
helpers_test.go:175: Cleaning up "cert-expiration-693908" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-693908
E0904 20:33:59.413958   12431 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19575-5257/.minikube/profiles/gvisor-769621/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestCertExpiration (295.77s)

TestDockerFlags (58.59s)

=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags

=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-linux-amd64 start -p docker-flags-420863 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=kvm2 
docker_test.go:51: (dbg) Done: out/minikube-linux-amd64 start -p docker-flags-420863 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=kvm2 : (56.846944279s)
docker_test.go:56: (dbg) Run:  out/minikube-linux-amd64 -p docker-flags-420863 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:67: (dbg) Run:  out/minikube-linux-amd64 -p docker-flags-420863 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
helpers_test.go:175: Cleaning up "docker-flags-420863" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-flags-420863
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-flags-420863: (1.262710121s)
--- PASS: TestDockerFlags (58.59s)

TestForceSystemdFlag (76.55s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-985915 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2 
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-985915 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2 : (1m14.742626497s)
docker_test.go:110: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-985915 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-flag-985915" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-985915
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-985915: (1.543664862s)
--- PASS: TestForceSystemdFlag (76.55s)

TestForceSystemdEnv (76.58s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-422510 --memory=2048 --alsologtostderr -v=5 --driver=kvm2 
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-422510 --memory=2048 --alsologtostderr -v=5 --driver=kvm2 : (1m15.391501814s)
docker_test.go:110: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-env-422510 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-env-422510" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-422510
--- PASS: TestForceSystemdEnv (76.58s)

TestKVMDriverInstallOrUpdate (4.27s)

=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate

=== CONT  TestKVMDriverInstallOrUpdate
--- PASS: TestKVMDriverInstallOrUpdate (4.27s)

TestErrorSpam/setup (46.25s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-636103 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-636103 --driver=kvm2 
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-636103 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-636103 --driver=kvm2 : (46.248131855s)
--- PASS: TestErrorSpam/setup (46.25s)

TestErrorSpam/start (0.34s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-636103 --log_dir /tmp/nospam-636103 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-636103 --log_dir /tmp/nospam-636103 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-636103 --log_dir /tmp/nospam-636103 start --dry-run
--- PASS: TestErrorSpam/start (0.34s)

                                                
                                    
TestErrorSpam/status (0.72s)
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-636103 --log_dir /tmp/nospam-636103 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-636103 --log_dir /tmp/nospam-636103 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-636103 --log_dir /tmp/nospam-636103 status
--- PASS: TestErrorSpam/status (0.72s)

                                                
                                    
TestErrorSpam/pause (1.18s)
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-636103 --log_dir /tmp/nospam-636103 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-636103 --log_dir /tmp/nospam-636103 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-636103 --log_dir /tmp/nospam-636103 pause
--- PASS: TestErrorSpam/pause (1.18s)

                                                
                                    
TestErrorSpam/unpause (1.39s)
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-636103 --log_dir /tmp/nospam-636103 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-636103 --log_dir /tmp/nospam-636103 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-636103 --log_dir /tmp/nospam-636103 unpause
--- PASS: TestErrorSpam/unpause (1.39s)

                                                
                                    
TestErrorSpam/stop (16.13s)
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-636103 --log_dir /tmp/nospam-636103 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-636103 --log_dir /tmp/nospam-636103 stop: (12.408063646s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-636103 --log_dir /tmp/nospam-636103 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-636103 --log_dir /tmp/nospam-636103 stop: (2.023566533s)
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-636103 --log_dir /tmp/nospam-636103 stop
error_spam_test.go:182: (dbg) Done: out/minikube-linux-amd64 -p nospam-636103 --log_dir /tmp/nospam-636103 stop: (1.702298728s)
--- PASS: TestErrorSpam/stop (16.13s)

                                                
                                    
TestFunctional/serial/CopySyncFile (0s)
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1855: local sync path: /home/jenkins/minikube-integration/19575-5257/.minikube/files/etc/test/nested/copy/12431/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
TestFunctional/serial/StartWithProxy (94.97s)
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2234: (dbg) Run:  out/minikube-linux-amd64 start -p functional-566210 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2 
functional_test.go:2234: (dbg) Done: out/minikube-linux-amd64 start -p functional-566210 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2 : (1m34.968855795s)
--- PASS: TestFunctional/serial/StartWithProxy (94.97s)

                                                
                                    
TestFunctional/serial/AuditLog (0s)
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
TestFunctional/serial/SoftStart (40.41s)
=== RUN   TestFunctional/serial/SoftStart
functional_test.go:659: (dbg) Run:  out/minikube-linux-amd64 start -p functional-566210 --alsologtostderr -v=8
functional_test.go:659: (dbg) Done: out/minikube-linux-amd64 start -p functional-566210 --alsologtostderr -v=8: (40.412712401s)
functional_test.go:663: soft start took 40.413480148s for "functional-566210" cluster.
--- PASS: TestFunctional/serial/SoftStart (40.41s)

                                                
                                    
TestFunctional/serial/KubeContext (0.04s)
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:681: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

                                                
                                    
TestFunctional/serial/KubectlGetPods (0.07s)
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:696: (dbg) Run:  kubectl --context functional-566210 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.07s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_remote (2.15s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-566210 cache add registry.k8s.io/pause:3.1
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-566210 cache add registry.k8s.io/pause:3.3
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-566210 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (2.15s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_local (1.24s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1077: (dbg) Run:  docker build -t minikube-local-cache-test:functional-566210 /tmp/TestFunctionalserialCacheCmdcacheadd_local985736466/001
functional_test.go:1089: (dbg) Run:  out/minikube-linux-amd64 -p functional-566210 cache add minikube-local-cache-test:functional-566210
functional_test.go:1094: (dbg) Run:  out/minikube-linux-amd64 -p functional-566210 cache delete minikube-local-cache-test:functional-566210
functional_test.go:1083: (dbg) Run:  docker rmi minikube-local-cache-test:functional-566210
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.24s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1102: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/list (0.04s)
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1110: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.04s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.21s)
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1124: (dbg) Run:  out/minikube-linux-amd64 -p functional-566210 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.21s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/cache_reload (1.09s)
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1147: (dbg) Run:  out/minikube-linux-amd64 -p functional-566210 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Run:  out/minikube-linux-amd64 -p functional-566210 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-566210 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (208.04211ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1158: (dbg) Run:  out/minikube-linux-amd64 -p functional-566210 cache reload
functional_test.go:1163: (dbg) Run:  out/minikube-linux-amd64 -p functional-566210 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.09s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete (0.09s)
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1172: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1172: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.09s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmd (0.1s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:716: (dbg) Run:  out/minikube-linux-amd64 -p functional-566210 kubectl -- --context functional-566210 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.10s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.1s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:741: (dbg) Run:  out/kubectl --context functional-566210 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.10s)

                                                
                                    
TestFunctional/serial/ExtraConfig (41.87s)
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:757: (dbg) Run:  out/minikube-linux-amd64 start -p functional-566210 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:757: (dbg) Done: out/minikube-linux-amd64 start -p functional-566210 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (41.867679161s)
functional_test.go:761: restart took 41.867799852s for "functional-566210" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (41.87s)

                                                
                                    
TestFunctional/serial/ComponentHealth (0.06s)
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:810: (dbg) Run:  kubectl --context functional-566210 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:825: etcd phase: Running
functional_test.go:835: etcd status: Ready
functional_test.go:825: kube-apiserver phase: Running
functional_test.go:835: kube-apiserver status: Ready
functional_test.go:825: kube-controller-manager phase: Running
functional_test.go:835: kube-controller-manager status: Ready
functional_test.go:825: kube-scheduler phase: Running
functional_test.go:835: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.06s)

                                                
                                    
TestFunctional/serial/LogsCmd (0.92s)
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1236: (dbg) Run:  out/minikube-linux-amd64 -p functional-566210 logs
--- PASS: TestFunctional/serial/LogsCmd (0.92s)

                                                
                                    
TestFunctional/serial/LogsFileCmd (0.95s)
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1250: (dbg) Run:  out/minikube-linux-amd64 -p functional-566210 logs --file /tmp/TestFunctionalserialLogsFileCmd1761722151/001/logs.txt
--- PASS: TestFunctional/serial/LogsFileCmd (0.95s)

                                                
                                    
TestFunctional/serial/InvalidService (5.94s)
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2321: (dbg) Run:  kubectl --context functional-566210 apply -f testdata/invalidsvc.yaml
functional_test.go:2335: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-566210
functional_test.go:2335: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-566210: exit status 115 (266.619982ms)

                                                
                                                
-- stdout --
	|-----------|-------------|-------------|----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL             |
	|-----------|-------------|-------------|----------------------------|
	| default   | invalid-svc |          80 | http://192.168.39.44:32055 |
	|-----------|-------------|-------------|----------------------------|
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2327: (dbg) Run:  kubectl --context functional-566210 delete -f testdata/invalidsvc.yaml
functional_test.go:2327: (dbg) Done: kubectl --context functional-566210 delete -f testdata/invalidsvc.yaml: (2.428122495s)
--- PASS: TestFunctional/serial/InvalidService (5.94s)

                                                
                                    
TestFunctional/parallel/ConfigCmd (0.34s)
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-566210 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-566210 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-566210 config get cpus: exit status 14 (60.023184ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-566210 config set cpus 2
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-566210 config get cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-566210 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-566210 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-566210 config get cpus: exit status 14 (61.477785ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.34s)

                                                
                                    
TestFunctional/parallel/DashboardCmd (15.6s)
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:905: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-566210 --alsologtostderr -v=1]
functional_test.go:910: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-566210 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 22562: os: process already finished
E0904 19:44:40.710042   12431 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19575-5257/.minikube/profiles/addons-586464/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestFunctional/parallel/DashboardCmd (15.60s)

                                                
                                    
TestFunctional/parallel/DryRun (0.26s)
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:974: (dbg) Run:  out/minikube-linux-amd64 start -p functional-566210 --dry-run --memory 250MB --alsologtostderr --driver=kvm2 
E0904 19:44:20.860566   12431 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19575-5257/.minikube/profiles/addons-586464/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:974: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-566210 --dry-run --memory 250MB --alsologtostderr --driver=kvm2 : exit status 23 (127.522574ms)

                                                
                                                
-- stdout --
	* [functional-566210] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19575
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19575-5257/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19575-5257/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0904 19:44:20.848200   22212 out.go:345] Setting OutFile to fd 1 ...
	I0904 19:44:20.848428   22212 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0904 19:44:20.848436   22212 out.go:358] Setting ErrFile to fd 2...
	I0904 19:44:20.848440   22212 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0904 19:44:20.848602   22212 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19575-5257/.minikube/bin
	I0904 19:44:20.849091   22212 out.go:352] Setting JSON to false
	I0904 19:44:20.850012   22212 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":1609,"bootTime":1725477452,"procs":215,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0904 19:44:20.850074   22212 start.go:139] virtualization: kvm guest
	I0904 19:44:20.852361   22212 out.go:177] * [functional-566210] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0904 19:44:20.853809   22212 notify.go:220] Checking for updates...
	I0904 19:44:20.853819   22212 out.go:177]   - MINIKUBE_LOCATION=19575
	I0904 19:44:20.855341   22212 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0904 19:44:20.856762   22212 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19575-5257/kubeconfig
	I0904 19:44:20.858270   22212 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19575-5257/.minikube
	I0904 19:44:20.859590   22212 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0904 19:44:20.860887   22212 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0904 19:44:20.863186   22212 config.go:182] Loaded profile config "functional-566210": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0904 19:44:20.863638   22212 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0904 19:44:20.863685   22212 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0904 19:44:20.878559   22212 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38591
	I0904 19:44:20.878974   22212 main.go:141] libmachine: () Calling .GetVersion
	I0904 19:44:20.879490   22212 main.go:141] libmachine: Using API Version  1
	I0904 19:44:20.879511   22212 main.go:141] libmachine: () Calling .SetConfigRaw
	I0904 19:44:20.879795   22212 main.go:141] libmachine: () Calling .GetMachineName
	I0904 19:44:20.879961   22212 main.go:141] libmachine: (functional-566210) Calling .DriverName
	I0904 19:44:20.880199   22212 driver.go:394] Setting default libvirt URI to qemu:///system
	I0904 19:44:20.880511   22212 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0904 19:44:20.880554   22212 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0904 19:44:20.895893   22212 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45241
	I0904 19:44:20.896321   22212 main.go:141] libmachine: () Calling .GetVersion
	I0904 19:44:20.896808   22212 main.go:141] libmachine: Using API Version  1
	I0904 19:44:20.896831   22212 main.go:141] libmachine: () Calling .SetConfigRaw
	I0904 19:44:20.897172   22212 main.go:141] libmachine: () Calling .GetMachineName
	I0904 19:44:20.897396   22212 main.go:141] libmachine: (functional-566210) Calling .DriverName
	I0904 19:44:20.930106   22212 out.go:177] * Using the kvm2 driver based on existing profile
	I0904 19:44:20.931377   22212 start.go:297] selected driver: kvm2
	I0904 19:44:20.931393   22212 start.go:901] validating driver "kvm2" against &{Name:functional-566210 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.34.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:functional-566210 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.44 Port:8441 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0904 19:44:20.931520   22212 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0904 19:44:20.933518   22212 out.go:201] 
	W0904 19:44:20.934757   22212 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0904 19:44:20.935949   22212 out.go:201] 

                                                
                                                
** /stderr **
functional_test.go:991: (dbg) Run:  out/minikube-linux-amd64 start -p functional-566210 --dry-run --alsologtostderr -v=1 --driver=kvm2 
--- PASS: TestFunctional/parallel/DryRun (0.26s)

                                                
                                    
TestFunctional/parallel/InternationalLanguage (0.13s)
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1020: (dbg) Run:  out/minikube-linux-amd64 start -p functional-566210 --dry-run --memory 250MB --alsologtostderr --driver=kvm2 
functional_test.go:1020: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-566210 --dry-run --memory 250MB --alsologtostderr --driver=kvm2 : exit status 23 (134.417364ms)

                                                
                                                
-- stdout --
	* [functional-566210] minikube v1.34.0 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19575
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19575-5257/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19575-5257/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0904 19:44:21.112888   22283 out.go:345] Setting OutFile to fd 1 ...
	I0904 19:44:21.113026   22283 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0904 19:44:21.113036   22283 out.go:358] Setting ErrFile to fd 2...
	I0904 19:44:21.113043   22283 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0904 19:44:21.113343   22283 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19575-5257/.minikube/bin
	I0904 19:44:21.113868   22283 out.go:352] Setting JSON to false
	I0904 19:44:21.114816   22283 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":1609,"bootTime":1725477452,"procs":219,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0904 19:44:21.114873   22283 start.go:139] virtualization: kvm guest
	I0904 19:44:21.117234   22283 out.go:177] * [functional-566210] minikube v1.34.0 sur Ubuntu 20.04 (kvm/amd64)
	I0904 19:44:21.118737   22283 out.go:177]   - MINIKUBE_LOCATION=19575
	I0904 19:44:21.118780   22283 notify.go:220] Checking for updates...
	I0904 19:44:21.121441   22283 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0904 19:44:21.122918   22283 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19575-5257/kubeconfig
	I0904 19:44:21.124194   22283 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19575-5257/.minikube
	I0904 19:44:21.125410   22283 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0904 19:44:21.126640   22283 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0904 19:44:21.128161   22283 config.go:182] Loaded profile config "functional-566210": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0904 19:44:21.128541   22283 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0904 19:44:21.128596   22283 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0904 19:44:21.143718   22283 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32839
	I0904 19:44:21.144160   22283 main.go:141] libmachine: () Calling .GetVersion
	I0904 19:44:21.144681   22283 main.go:141] libmachine: Using API Version  1
	I0904 19:44:21.144710   22283 main.go:141] libmachine: () Calling .SetConfigRaw
	I0904 19:44:21.145046   22283 main.go:141] libmachine: () Calling .GetMachineName
	I0904 19:44:21.145233   22283 main.go:141] libmachine: (functional-566210) Calling .DriverName
	I0904 19:44:21.145489   22283 driver.go:394] Setting default libvirt URI to qemu:///system
	I0904 19:44:21.145780   22283 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0904 19:44:21.145813   22283 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0904 19:44:21.160159   22283 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36319
	I0904 19:44:21.160522   22283 main.go:141] libmachine: () Calling .GetVersion
	I0904 19:44:21.161105   22283 main.go:141] libmachine: Using API Version  1
	I0904 19:44:21.161134   22283 main.go:141] libmachine: () Calling .SetConfigRaw
	I0904 19:44:21.161424   22283 main.go:141] libmachine: () Calling .GetMachineName
	I0904 19:44:21.161593   22283 main.go:141] libmachine: (functional-566210) Calling .DriverName
	I0904 19:44:21.192838   22283 out.go:177] * Utilisation du pilote kvm2 basé sur le profil existant
	I0904 19:44:21.194105   22283 start.go:297] selected driver: kvm2
	I0904 19:44:21.194120   22283 start.go:901] validating driver "kvm2" against &{Name:functional-566210 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.34.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:functional-566210 Nam
espace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.44 Port:8441 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host
Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0904 19:44:21.194213   22283 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0904 19:44:21.196319   22283 out.go:201] 
	W0904 19:44:21.197440   22283 out.go:270] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0904 19:44:21.198654   22283 out.go:201] 
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.13s)

TestFunctional/parallel/StatusCmd (0.83s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:854: (dbg) Run:  out/minikube-linux-amd64 -p functional-566210 status
functional_test.go:860: (dbg) Run:  out/minikube-linux-amd64 -p functional-566210 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:872: (dbg) Run:  out/minikube-linux-amd64 -p functional-566210 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.83s)

TestFunctional/parallel/ServiceCmdConnect (19.45s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1629: (dbg) Run:  kubectl --context functional-566210 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1635: (dbg) Run:  kubectl --context functional-566210 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-67bdd5bbb4-vh2zf" [562e85d8-7710-4c91-a22d-ceb19ffdb32e] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-67bdd5bbb4-vh2zf" [562e85d8-7710-4c91-a22d-ceb19ffdb32e] Running
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 19.004579303s
functional_test.go:1649: (dbg) Run:  out/minikube-linux-amd64 -p functional-566210 service hello-node-connect --url
functional_test.go:1655: found endpoint for hello-node-connect: http://192.168.39.44:32765
functional_test.go:1675: http://192.168.39.44:32765: success! body:

Hostname: hello-node-connect-67bdd5bbb4-vh2zf

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.39.44:8080/

Request Headers:
	accept-encoding=gzip
	host=192.168.39.44:32765
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-

--- PASS: TestFunctional/parallel/ServiceCmdConnect (19.45s)

TestFunctional/parallel/AddonsCmd (0.11s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1690: (dbg) Run:  out/minikube-linux-amd64 -p functional-566210 addons list
functional_test.go:1702: (dbg) Run:  out/minikube-linux-amd64 -p functional-566210 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.11s)

TestFunctional/parallel/PersistentVolumeClaim (48.22s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [ff93784f-9b5b-47d8-b06b-90624570f2bc] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.004652972s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-566210 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-566210 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-566210 get pvc myclaim -o=json
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-566210 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-566210 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [c7b0ecb7-1a12-4ace-b085-bd50bd4a6986] Pending
helpers_test.go:344: "sp-pod" [c7b0ecb7-1a12-4ace-b085-bd50bd4a6986] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [c7b0ecb7-1a12-4ace-b085-bd50bd4a6986] Running
E0904 19:44:20.213983   12431 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19575-5257/.minikube/profiles/addons-586464/client.crt: no such file or directory" logger="UnhandledError"
E0904 19:44:20.220828   12431 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19575-5257/.minikube/profiles/addons-586464/client.crt: no such file or directory" logger="UnhandledError"
E0904 19:44:20.232254   12431 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19575-5257/.minikube/profiles/addons-586464/client.crt: no such file or directory" logger="UnhandledError"
E0904 19:44:20.253710   12431 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19575-5257/.minikube/profiles/addons-586464/client.crt: no such file or directory" logger="UnhandledError"
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 24.004679417s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-566210 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-566210 delete -f testdata/storage-provisioner/pod.yaml
E0904 19:44:25.346655   12431 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19575-5257/.minikube/profiles/addons-586464/client.crt: no such file or directory" logger="UnhandledError"
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-566210 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [48448f3f-ee87-4269-97e6-dfe858fdad7a] Pending
helpers_test.go:344: "sp-pod" [48448f3f-ee87-4269-97e6-dfe858fdad7a] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [48448f3f-ee87-4269-97e6-dfe858fdad7a] Running
2024/09/04 19:44:36 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 15.004626334s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-566210 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (48.22s)

TestFunctional/parallel/SSHCmd (0.42s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1725: (dbg) Run:  out/minikube-linux-amd64 -p functional-566210 ssh "echo hello"
functional_test.go:1742: (dbg) Run:  out/minikube-linux-amd64 -p functional-566210 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.42s)

TestFunctional/parallel/CpCmd (1.32s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-566210 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-566210 ssh -n functional-566210 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-566210 cp functional-566210:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd45796600/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-566210 ssh -n functional-566210 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-566210 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-566210 ssh -n functional-566210 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.32s)

TestFunctional/parallel/MySQL (29.94s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1793: (dbg) Run:  kubectl --context functional-566210 replace --force -f testdata/mysql.yaml
functional_test.go:1799: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-6cdb49bbb-d25mr" [cbf91ca8-ae14-4a07-b092-73a1dbcf6c0e] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-6cdb49bbb-d25mr" [cbf91ca8-ae14-4a07-b092-73a1dbcf6c0e] Running
functional_test.go:1799: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 24.00432484s
functional_test.go:1807: (dbg) Run:  kubectl --context functional-566210 exec mysql-6cdb49bbb-d25mr -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context functional-566210 exec mysql-6cdb49bbb-d25mr -- mysql -ppassword -e "show databases;": exit status 1 (255.880926ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **
functional_test.go:1807: (dbg) Run:  kubectl --context functional-566210 exec mysql-6cdb49bbb-d25mr -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context functional-566210 exec mysql-6cdb49bbb-d25mr -- mysql -ppassword -e "show databases;": exit status 1 (174.219763ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **
functional_test.go:1807: (dbg) Run:  kubectl --context functional-566210 exec mysql-6cdb49bbb-d25mr -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context functional-566210 exec mysql-6cdb49bbb-d25mr -- mysql -ppassword -e "show databases;": exit status 1 (303.059428ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
functional_test.go:1807: (dbg) Run:  kubectl --context functional-566210 exec mysql-6cdb49bbb-d25mr -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (29.94s)

TestFunctional/parallel/FileSync (0.21s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1929: Checking for existence of /etc/test/nested/copy/12431/hosts within VM
functional_test.go:1931: (dbg) Run:  out/minikube-linux-amd64 -p functional-566210 ssh "sudo cat /etc/test/nested/copy/12431/hosts"
functional_test.go:1936: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.21s)

TestFunctional/parallel/CertSync (1.32s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1972: Checking for existence of /etc/ssl/certs/12431.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-566210 ssh "sudo cat /etc/ssl/certs/12431.pem"
functional_test.go:1972: Checking for existence of /usr/share/ca-certificates/12431.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-566210 ssh "sudo cat /usr/share/ca-certificates/12431.pem"
functional_test.go:1972: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-566210 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/124312.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-566210 ssh "sudo cat /etc/ssl/certs/124312.pem"
functional_test.go:1999: Checking for existence of /usr/share/ca-certificates/124312.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-566210 ssh "sudo cat /usr/share/ca-certificates/124312.pem"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-566210 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.32s)

TestFunctional/parallel/NodeLabels (0.05s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:219: (dbg) Run:  kubectl --context functional-566210 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.05s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.23s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2027: (dbg) Run:  out/minikube-linux-amd64 -p functional-566210 ssh "sudo systemctl is-active crio"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-566210 ssh "sudo systemctl is-active crio": exit status 1 (225.396416ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.23s)

TestFunctional/parallel/DockerEnv/bash (0.84s)

=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:499: (dbg) Run:  /bin/bash -c "eval $(out/minikube-linux-amd64 -p functional-566210 docker-env) && out/minikube-linux-amd64 status -p functional-566210"
functional_test.go:522: (dbg) Run:  /bin/bash -c "eval $(out/minikube-linux-amd64 -p functional-566210 docker-env) && docker images"
--- PASS: TestFunctional/parallel/DockerEnv/bash (0.84s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.1s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-566210 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.10s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.09s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-566210 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.09s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.09s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-566210 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.09s)

TestFunctional/parallel/Version/short (0.04s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2256: (dbg) Run:  out/minikube-linux-amd64 -p functional-566210 version --short
--- PASS: TestFunctional/parallel/Version/short (0.04s)

TestFunctional/parallel/Version/components (0.9s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2270: (dbg) Run:  out/minikube-linux-amd64 -p functional-566210 version -o=json --components
E0904 19:44:22.784590   12431 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19575-5257/.minikube/profiles/addons-586464/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestFunctional/parallel/Version/components (0.90s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.21s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-566210 image ls --format short --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-566210 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.31.0
registry.k8s.io/kube-proxy:v1.31.0
registry.k8s.io/kube-controller-manager:v1.31.0
registry.k8s.io/kube-apiserver:v1.31.0
registry.k8s.io/etcd:3.5.15-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.11.1
gcr.io/k8s-minikube/storage-provisioner:v5
docker.io/library/nginx:latest
docker.io/library/mysql:5.7
docker.io/library/minikube-local-cache-test:functional-566210
docker.io/kicbase/echo-server:functional-566210
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-566210 image ls --format short --alsologtostderr:
I0904 19:44:23.221504   22572 out.go:345] Setting OutFile to fd 1 ...
I0904 19:44:23.221758   22572 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0904 19:44:23.221768   22572 out.go:358] Setting ErrFile to fd 2...
I0904 19:44:23.221772   22572 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0904 19:44:23.222007   22572 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19575-5257/.minikube/bin
I0904 19:44:23.222580   22572 config.go:182] Loaded profile config "functional-566210": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0904 19:44:23.222691   22572 config.go:182] Loaded profile config "functional-566210": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0904 19:44:23.223071   22572 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0904 19:44:23.223124   22572 main.go:141] libmachine: Launching plugin server for driver kvm2
I0904 19:44:23.243417   22572 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37353
I0904 19:44:23.243886   22572 main.go:141] libmachine: () Calling .GetVersion
I0904 19:44:23.244498   22572 main.go:141] libmachine: Using API Version  1
I0904 19:44:23.244524   22572 main.go:141] libmachine: () Calling .SetConfigRaw
I0904 19:44:23.244931   22572 main.go:141] libmachine: () Calling .GetMachineName
I0904 19:44:23.245108   22572 main.go:141] libmachine: (functional-566210) Calling .GetState
I0904 19:44:23.247195   22572 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0904 19:44:23.247243   22572 main.go:141] libmachine: Launching plugin server for driver kvm2
I0904 19:44:23.262153   22572 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40287
I0904 19:44:23.262566   22572 main.go:141] libmachine: () Calling .GetVersion
I0904 19:44:23.263014   22572 main.go:141] libmachine: Using API Version  1
I0904 19:44:23.263036   22572 main.go:141] libmachine: () Calling .SetConfigRaw
I0904 19:44:23.263341   22572 main.go:141] libmachine: () Calling .GetMachineName
I0904 19:44:23.263553   22572 main.go:141] libmachine: (functional-566210) Calling .DriverName
I0904 19:44:23.263749   22572 ssh_runner.go:195] Run: systemctl --version
I0904 19:44:23.263778   22572 main.go:141] libmachine: (functional-566210) Calling .GetSSHHostname
I0904 19:44:23.266552   22572 main.go:141] libmachine: (functional-566210) DBG | domain functional-566210 has defined MAC address 52:54:00:2c:86:9e in network mk-functional-566210
I0904 19:44:23.266906   22572 main.go:141] libmachine: (functional-566210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:86:9e", ip: ""} in network mk-functional-566210: {Iface:virbr1 ExpiryTime:2024-09-04 20:40:54 +0000 UTC Type:0 Mac:52:54:00:2c:86:9e Iaid: IPaddr:192.168.39.44 Prefix:24 Hostname:functional-566210 Clientid:01:52:54:00:2c:86:9e}
I0904 19:44:23.266944   22572 main.go:141] libmachine: (functional-566210) DBG | domain functional-566210 has defined IP address 192.168.39.44 and MAC address 52:54:00:2c:86:9e in network mk-functional-566210
I0904 19:44:23.267097   22572 main.go:141] libmachine: (functional-566210) Calling .GetSSHPort
I0904 19:44:23.267288   22572 main.go:141] libmachine: (functional-566210) Calling .GetSSHKeyPath
I0904 19:44:23.267444   22572 main.go:141] libmachine: (functional-566210) Calling .GetSSHUsername
I0904 19:44:23.267602   22572 sshutil.go:53] new ssh client: &{IP:192.168.39.44 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19575-5257/.minikube/machines/functional-566210/id_rsa Username:docker}
I0904 19:44:23.347479   22572 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
I0904 19:44:23.388664   22572 main.go:141] libmachine: Making call to close driver server
I0904 19:44:23.388679   22572 main.go:141] libmachine: (functional-566210) Calling .Close
I0904 19:44:23.388947   22572 main.go:141] libmachine: Successfully made call to close driver server
I0904 19:44:23.388967   22572 main.go:141] libmachine: Making call to close connection to plugin binary
I0904 19:44:23.388972   22572 main.go:141] libmachine: (functional-566210) DBG | Closing plugin on server side
I0904 19:44:23.388981   22572 main.go:141] libmachine: Making call to close driver server
I0904 19:44:23.389021   22572 main.go:141] libmachine: (functional-566210) Calling .Close
I0904 19:44:23.389277   22572 main.go:141] libmachine: (functional-566210) DBG | Closing plugin on server side
I0904 19:44:23.389280   22572 main.go:141] libmachine: Successfully made call to close driver server
I0904 19:44:23.389315   22572 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.21s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.23s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-566210 image ls --format table --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-566210 image ls --format table --alsologtostderr:
|---------------------------------------------|-------------------|---------------|--------|
|                    Image                    |        Tag        |   Image ID    |  Size  |
|---------------------------------------------|-------------------|---------------|--------|
| registry.k8s.io/etcd                        | 3.5.15-0          | 2e96e5913fc06 | 148MB  |
| registry.k8s.io/pause                       | 3.1               | da86e6ba6ca19 | 742kB  |
| registry.k8s.io/kube-scheduler              | v1.31.0           | 1766f54c897f0 | 67.4MB |
| registry.k8s.io/pause                       | 3.10              | 873ed75102791 | 736kB  |
| registry.k8s.io/pause                       | 3.3               | 0184c1613d929 | 683kB  |
| registry.k8s.io/kube-controller-manager     | v1.31.0           | 045733566833c | 88.4MB |
| registry.k8s.io/kube-apiserver              | v1.31.0           | 604f5db92eaa8 | 94.2MB |
| registry.k8s.io/kube-proxy                  | v1.31.0           | ad83b2ca7b09e | 91.5MB |
| docker.io/kicbase/echo-server               | functional-566210 | 9056ab77afb8e | 4.94MB |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc      | 56cc512116c8f | 4.4MB  |
| localhost/my-image                          | functional-566210 | ec408698d6d16 | 1.24MB |
| docker.io/library/nginx                     | latest            | 5ef79149e0ec8 | 188MB  |
| docker.io/library/mysql                     | 5.7               | 5107333e08a87 | 501MB  |
| registry.k8s.io/coredns/coredns             | v1.11.1           | cbb01a7bd410d | 59.8MB |
| gcr.io/k8s-minikube/storage-provisioner     | v5                | 6e38f40d628db | 31.5MB |
| registry.k8s.io/echoserver                  | 1.8               | 82e4c8a736a4f | 95.4MB |
| registry.k8s.io/pause                       | latest            | 350b164e7ae1d | 240kB  |
| docker.io/library/minikube-local-cache-test | functional-566210 | 5e54e1c41fede | 30B    |
|---------------------------------------------|-------------------|---------------|--------|
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-566210 image ls --format table --alsologtostderr:
I0904 19:44:27.224167   22778 out.go:345] Setting OutFile to fd 1 ...
I0904 19:44:27.224415   22778 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0904 19:44:27.224425   22778 out.go:358] Setting ErrFile to fd 2...
I0904 19:44:27.224429   22778 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0904 19:44:27.224649   22778 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19575-5257/.minikube/bin
I0904 19:44:27.225277   22778 config.go:182] Loaded profile config "functional-566210": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0904 19:44:27.225423   22778 config.go:182] Loaded profile config "functional-566210": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0904 19:44:27.225932   22778 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0904 19:44:27.225991   22778 main.go:141] libmachine: Launching plugin server for driver kvm2
I0904 19:44:27.242385   22778 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42359
I0904 19:44:27.242852   22778 main.go:141] libmachine: () Calling .GetVersion
I0904 19:44:27.243436   22778 main.go:141] libmachine: Using API Version  1
I0904 19:44:27.243464   22778 main.go:141] libmachine: () Calling .SetConfigRaw
I0904 19:44:27.243828   22778 main.go:141] libmachine: () Calling .GetMachineName
I0904 19:44:27.244071   22778 main.go:141] libmachine: (functional-566210) Calling .GetState
I0904 19:44:27.246326   22778 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0904 19:44:27.246366   22778 main.go:141] libmachine: Launching plugin server for driver kvm2
I0904 19:44:27.265659   22778 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42011
I0904 19:44:27.266204   22778 main.go:141] libmachine: () Calling .GetVersion
I0904 19:44:27.266792   22778 main.go:141] libmachine: Using API Version  1
I0904 19:44:27.266828   22778 main.go:141] libmachine: () Calling .SetConfigRaw
I0904 19:44:27.267201   22778 main.go:141] libmachine: () Calling .GetMachineName
I0904 19:44:27.267423   22778 main.go:141] libmachine: (functional-566210) Calling .DriverName
I0904 19:44:27.267652   22778 ssh_runner.go:195] Run: systemctl --version
I0904 19:44:27.267684   22778 main.go:141] libmachine: (functional-566210) Calling .GetSSHHostname
I0904 19:44:27.273501   22778 main.go:141] libmachine: (functional-566210) DBG | domain functional-566210 has defined MAC address 52:54:00:2c:86:9e in network mk-functional-566210
I0904 19:44:27.273798   22778 main.go:141] libmachine: (functional-566210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:86:9e", ip: ""} in network mk-functional-566210: {Iface:virbr1 ExpiryTime:2024-09-04 20:40:54 +0000 UTC Type:0 Mac:52:54:00:2c:86:9e Iaid: IPaddr:192.168.39.44 Prefix:24 Hostname:functional-566210 Clientid:01:52:54:00:2c:86:9e}
I0904 19:44:27.273831   22778 main.go:141] libmachine: (functional-566210) DBG | domain functional-566210 has defined IP address 192.168.39.44 and MAC address 52:54:00:2c:86:9e in network mk-functional-566210
I0904 19:44:27.274019   22778 main.go:141] libmachine: (functional-566210) Calling .GetSSHPort
I0904 19:44:27.274219   22778 main.go:141] libmachine: (functional-566210) Calling .GetSSHKeyPath
I0904 19:44:27.274376   22778 main.go:141] libmachine: (functional-566210) Calling .GetSSHUsername
I0904 19:44:27.274498   22778 sshutil.go:53] new ssh client: &{IP:192.168.39.44 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19575-5257/.minikube/machines/functional-566210/id_rsa Username:docker}
I0904 19:44:27.359161   22778 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
I0904 19:44:27.400869   22778 main.go:141] libmachine: Making call to close driver server
I0904 19:44:27.400884   22778 main.go:141] libmachine: (functional-566210) Calling .Close
I0904 19:44:27.401162   22778 main.go:141] libmachine: Successfully made call to close driver server
I0904 19:44:27.401180   22778 main.go:141] libmachine: Making call to close connection to plugin binary
I0904 19:44:27.401189   22778 main.go:141] libmachine: Making call to close driver server
I0904 19:44:27.401198   22778 main.go:141] libmachine: (functional-566210) Calling .Close
I0904 19:44:27.401430   22778 main.go:141] libmachine: (functional-566210) DBG | Closing plugin on server side
I0904 19:44:27.401433   22778 main.go:141] libmachine: Successfully made call to close driver server
I0904 19:44:27.401447   22778 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.23s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.22s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-566210 image ls --format json --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-566210 image ls --format json --alsologtostderr:
[{"id":"873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.10"],"size":"736000"},{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":[],"repoTags":["docker.io/kicbase/echo-server:functional-566210"],"size":"4940000"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"683000"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"742000"},{"id":"ec408698d6d167180cd6ddb57d0527ba50937cafc60d6a0e970d363ff90e60db","repoDigests":[],"repoTags":["localhost/my-image:functional-566210"],"size":"1240000"},{"id":"604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.31.0"],"size":"94200000"},{"id":"ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494","re
poDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.31.0"],"size":"91500000"},{"id":"2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.5.15-0"],"size":"148000000"},{"id":"045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.31.0"],"size":"88400000"},{"id":"5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933","repoDigests":[],"repoTags":["docker.io/library/mysql:5.7"],"size":"501000000"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":[],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"95400000"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"240000"},{"id":"5ef79149e0ec84a7a9f9284c3f91aa3c20608f8391f5445eabe92ef07dbda03c","repoDigests":[],"repoTags":["docker.io/library/nginx:latest"],"size":"188000000"}
,{"id":"cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.1"],"size":"59800000"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31500000"},{"id":"5e54e1c41fede59a742761b1b4fd70d6ccd0860e1ae8c09b7756b4ebb762bade","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-566210"],"size":"30"},{"id":"1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.31.0"],"size":"67400000"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4400000"}]
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-566210 image ls --format json --alsologtostderr:
I0904 19:44:27.000760   22745 out.go:345] Setting OutFile to fd 1 ...
I0904 19:44:27.000891   22745 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0904 19:44:27.000901   22745 out.go:358] Setting ErrFile to fd 2...
I0904 19:44:27.000907   22745 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0904 19:44:27.001075   22745 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19575-5257/.minikube/bin
I0904 19:44:27.001643   22745 config.go:182] Loaded profile config "functional-566210": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0904 19:44:27.001755   22745 config.go:182] Loaded profile config "functional-566210": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0904 19:44:27.002133   22745 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0904 19:44:27.002195   22745 main.go:141] libmachine: Launching plugin server for driver kvm2
I0904 19:44:27.016862   22745 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44645
I0904 19:44:27.017291   22745 main.go:141] libmachine: () Calling .GetVersion
I0904 19:44:27.017922   22745 main.go:141] libmachine: Using API Version  1
I0904 19:44:27.017948   22745 main.go:141] libmachine: () Calling .SetConfigRaw
I0904 19:44:27.018294   22745 main.go:141] libmachine: () Calling .GetMachineName
I0904 19:44:27.018483   22745 main.go:141] libmachine: (functional-566210) Calling .GetState
I0904 19:44:27.020314   22745 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0904 19:44:27.020349   22745 main.go:141] libmachine: Launching plugin server for driver kvm2
I0904 19:44:27.034886   22745 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34729
I0904 19:44:27.035309   22745 main.go:141] libmachine: () Calling .GetVersion
I0904 19:44:27.035771   22745 main.go:141] libmachine: Using API Version  1
I0904 19:44:27.035795   22745 main.go:141] libmachine: () Calling .SetConfigRaw
I0904 19:44:27.036070   22745 main.go:141] libmachine: () Calling .GetMachineName
I0904 19:44:27.036244   22745 main.go:141] libmachine: (functional-566210) Calling .DriverName
I0904 19:44:27.036421   22745 ssh_runner.go:195] Run: systemctl --version
I0904 19:44:27.036445   22745 main.go:141] libmachine: (functional-566210) Calling .GetSSHHostname
I0904 19:44:27.039095   22745 main.go:141] libmachine: (functional-566210) DBG | domain functional-566210 has defined MAC address 52:54:00:2c:86:9e in network mk-functional-566210
I0904 19:44:27.039506   22745 main.go:141] libmachine: (functional-566210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:86:9e", ip: ""} in network mk-functional-566210: {Iface:virbr1 ExpiryTime:2024-09-04 20:40:54 +0000 UTC Type:0 Mac:52:54:00:2c:86:9e Iaid: IPaddr:192.168.39.44 Prefix:24 Hostname:functional-566210 Clientid:01:52:54:00:2c:86:9e}
I0904 19:44:27.039549   22745 main.go:141] libmachine: (functional-566210) DBG | domain functional-566210 has defined IP address 192.168.39.44 and MAC address 52:54:00:2c:86:9e in network mk-functional-566210
I0904 19:44:27.039659   22745 main.go:141] libmachine: (functional-566210) Calling .GetSSHPort
I0904 19:44:27.039824   22745 main.go:141] libmachine: (functional-566210) Calling .GetSSHKeyPath
I0904 19:44:27.039977   22745 main.go:141] libmachine: (functional-566210) Calling .GetSSHUsername
I0904 19:44:27.040123   22745 sshutil.go:53] new ssh client: &{IP:192.168.39.44 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19575-5257/.minikube/machines/functional-566210/id_rsa Username:docker}
I0904 19:44:27.120479   22745 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
I0904 19:44:27.173020   22745 main.go:141] libmachine: Making call to close driver server
I0904 19:44:27.173036   22745 main.go:141] libmachine: (functional-566210) Calling .Close
I0904 19:44:27.173293   22745 main.go:141] libmachine: Successfully made call to close driver server
I0904 19:44:27.173309   22745 main.go:141] libmachine: Making call to close connection to plugin binary
I0904 19:44:27.173315   22745 main.go:141] libmachine: (functional-566210) DBG | Closing plugin on server side
I0904 19:44:27.173325   22745 main.go:141] libmachine: Making call to close driver server
I0904 19:44:27.173334   22745 main.go:141] libmachine: (functional-566210) Calling .Close
I0904 19:44:27.173590   22745 main.go:141] libmachine: Successfully made call to close driver server
I0904 19:44:27.173608   22745 main.go:141] libmachine: Making call to close connection to plugin binary
I0904 19:44:27.173619   22745 main.go:141] libmachine: (functional-566210) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.22s)

TestFunctional/parallel/ImageCommands/ImageListYaml (0.21s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-566210 image ls --format yaml --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-566210 image ls --format yaml --alsologtostderr:
- id: 5e54e1c41fede59a742761b1b4fd70d6ccd0860e1ae8c09b7756b4ebb762bade
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-566210
size: "30"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "240000"
- id: cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.1
size: "59800000"
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests: []
repoTags:
- docker.io/kicbase/echo-server:functional-566210
size: "4940000"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31500000"
- id: 5ef79149e0ec84a7a9f9284c3f91aa3c20608f8391f5445eabe92ef07dbda03c
repoDigests: []
repoTags:
- docker.io/library/nginx:latest
size: "188000000"
- id: ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.31.0
size: "91500000"
- id: 5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933
repoDigests: []
repoTags:
- docker.io/library/mysql:5.7
size: "501000000"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests: []
repoTags:
- registry.k8s.io/echoserver:1.8
size: "95400000"
- id: 604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.31.0
size: "94200000"
- id: 1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.31.0
size: "67400000"
- id: 873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.10
size: "736000"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "742000"
- id: 045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.31.0
size: "88400000"
- id: 2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.5.15-0
size: "148000000"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "683000"

                                                
                                                
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-566210 image ls --format yaml --alsologtostderr:
I0904 19:44:23.434610   22596 out.go:345] Setting OutFile to fd 1 ...
I0904 19:44:23.434835   22596 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0904 19:44:23.434844   22596 out.go:358] Setting ErrFile to fd 2...
I0904 19:44:23.434848   22596 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0904 19:44:23.435042   22596 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19575-5257/.minikube/bin
I0904 19:44:23.436354   22596 config.go:182] Loaded profile config "functional-566210": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0904 19:44:23.436828   22596 config.go:182] Loaded profile config "functional-566210": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0904 19:44:23.437241   22596 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0904 19:44:23.437282   22596 main.go:141] libmachine: Launching plugin server for driver kvm2
I0904 19:44:23.451812   22596 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44813
I0904 19:44:23.452271   22596 main.go:141] libmachine: () Calling .GetVersion
I0904 19:44:23.452819   22596 main.go:141] libmachine: Using API Version  1
I0904 19:44:23.452843   22596 main.go:141] libmachine: () Calling .SetConfigRaw
I0904 19:44:23.453202   22596 main.go:141] libmachine: () Calling .GetMachineName
I0904 19:44:23.453445   22596 main.go:141] libmachine: (functional-566210) Calling .GetState
I0904 19:44:23.455298   22596 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0904 19:44:23.455339   22596 main.go:141] libmachine: Launching plugin server for driver kvm2
I0904 19:44:23.470023   22596 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34485
I0904 19:44:23.470487   22596 main.go:141] libmachine: () Calling .GetVersion
I0904 19:44:23.471044   22596 main.go:141] libmachine: Using API Version  1
I0904 19:44:23.471065   22596 main.go:141] libmachine: () Calling .SetConfigRaw
I0904 19:44:23.471436   22596 main.go:141] libmachine: () Calling .GetMachineName
I0904 19:44:23.471619   22596 main.go:141] libmachine: (functional-566210) Calling .DriverName
I0904 19:44:23.471830   22596 ssh_runner.go:195] Run: systemctl --version
I0904 19:44:23.471859   22596 main.go:141] libmachine: (functional-566210) Calling .GetSSHHostname
I0904 19:44:23.474671   22596 main.go:141] libmachine: (functional-566210) DBG | domain functional-566210 has defined MAC address 52:54:00:2c:86:9e in network mk-functional-566210
I0904 19:44:23.475136   22596 main.go:141] libmachine: (functional-566210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:86:9e", ip: ""} in network mk-functional-566210: {Iface:virbr1 ExpiryTime:2024-09-04 20:40:54 +0000 UTC Type:0 Mac:52:54:00:2c:86:9e Iaid: IPaddr:192.168.39.44 Prefix:24 Hostname:functional-566210 Clientid:01:52:54:00:2c:86:9e}
I0904 19:44:23.475159   22596 main.go:141] libmachine: (functional-566210) DBG | domain functional-566210 has defined IP address 192.168.39.44 and MAC address 52:54:00:2c:86:9e in network mk-functional-566210
I0904 19:44:23.475321   22596 main.go:141] libmachine: (functional-566210) Calling .GetSSHPort
I0904 19:44:23.475525   22596 main.go:141] libmachine: (functional-566210) Calling .GetSSHKeyPath
I0904 19:44:23.475682   22596 main.go:141] libmachine: (functional-566210) Calling .GetSSHUsername
I0904 19:44:23.475828   22596 sshutil.go:53] new ssh client: &{IP:192.168.39.44 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19575-5257/.minikube/machines/functional-566210/id_rsa Username:docker}
I0904 19:44:23.572005   22596 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
I0904 19:44:23.594685   22596 main.go:141] libmachine: Making call to close driver server
I0904 19:44:23.594700   22596 main.go:141] libmachine: (functional-566210) Calling .Close
I0904 19:44:23.594976   22596 main.go:141] libmachine: Successfully made call to close driver server
I0904 19:44:23.594997   22596 main.go:141] libmachine: Making call to close connection to plugin binary
I0904 19:44:23.595007   22596 main.go:141] libmachine: Making call to close driver server
I0904 19:44:23.595015   22596 main.go:141] libmachine: (functional-566210) Calling .Close
I0904 19:44:23.595232   22596 main.go:141] libmachine: Successfully made call to close driver server
I0904 19:44:23.595250   22596 main.go:141] libmachine: Making call to close connection to plugin binary
I0904 19:44:23.595272   22596 main.go:141] libmachine: (functional-566210) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.21s)

TestFunctional/parallel/ImageCommands/ImageBuild (3.36s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:308: (dbg) Run:  out/minikube-linux-amd64 -p functional-566210 ssh pgrep buildkitd
functional_test.go:308: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-566210 ssh pgrep buildkitd: exit status 1 (179.270908ms)

** stderr **
	ssh: Process exited with status 1

** /stderr **
functional_test.go:315: (dbg) Run:  out/minikube-linux-amd64 -p functional-566210 image build -t localhost/my-image:functional-566210 testdata/build --alsologtostderr
functional_test.go:315: (dbg) Done: out/minikube-linux-amd64 -p functional-566210 image build -t localhost/my-image:functional-566210 testdata/build --alsologtostderr: (2.967182649s)
functional_test.go:323: (dbg) Stderr: out/minikube-linux-amd64 -p functional-566210 image build -t localhost/my-image:functional-566210 testdata/build --alsologtostderr:
I0904 19:44:23.819108   22650 out.go:345] Setting OutFile to fd 1 ...
I0904 19:44:23.819384   22650 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0904 19:44:23.819394   22650 out.go:358] Setting ErrFile to fd 2...
I0904 19:44:23.819398   22650 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0904 19:44:23.819548   22650 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19575-5257/.minikube/bin
I0904 19:44:23.820044   22650 config.go:182] Loaded profile config "functional-566210": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0904 19:44:23.820554   22650 config.go:182] Loaded profile config "functional-566210": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0904 19:44:23.820951   22650 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0904 19:44:23.820997   22650 main.go:141] libmachine: Launching plugin server for driver kvm2
I0904 19:44:23.835485   22650 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40513
I0904 19:44:23.835888   22650 main.go:141] libmachine: () Calling .GetVersion
I0904 19:44:23.836393   22650 main.go:141] libmachine: Using API Version  1
I0904 19:44:23.836411   22650 main.go:141] libmachine: () Calling .SetConfigRaw
I0904 19:44:23.836745   22650 main.go:141] libmachine: () Calling .GetMachineName
I0904 19:44:23.836908   22650 main.go:141] libmachine: (functional-566210) Calling .GetState
I0904 19:44:23.838550   22650 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0904 19:44:23.838582   22650 main.go:141] libmachine: Launching plugin server for driver kvm2
I0904 19:44:23.852667   22650 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43337
I0904 19:44:23.853037   22650 main.go:141] libmachine: () Calling .GetVersion
I0904 19:44:23.853532   22650 main.go:141] libmachine: Using API Version  1
I0904 19:44:23.853553   22650 main.go:141] libmachine: () Calling .SetConfigRaw
I0904 19:44:23.853930   22650 main.go:141] libmachine: () Calling .GetMachineName
I0904 19:44:23.854244   22650 main.go:141] libmachine: (functional-566210) Calling .DriverName
I0904 19:44:23.854482   22650 ssh_runner.go:195] Run: systemctl --version
I0904 19:44:23.854523   22650 main.go:141] libmachine: (functional-566210) Calling .GetSSHHostname
I0904 19:44:23.857103   22650 main.go:141] libmachine: (functional-566210) DBG | domain functional-566210 has defined MAC address 52:54:00:2c:86:9e in network mk-functional-566210
I0904 19:44:23.857557   22650 main.go:141] libmachine: (functional-566210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:86:9e", ip: ""} in network mk-functional-566210: {Iface:virbr1 ExpiryTime:2024-09-04 20:40:54 +0000 UTC Type:0 Mac:52:54:00:2c:86:9e Iaid: IPaddr:192.168.39.44 Prefix:24 Hostname:functional-566210 Clientid:01:52:54:00:2c:86:9e}
I0904 19:44:23.857590   22650 main.go:141] libmachine: (functional-566210) DBG | domain functional-566210 has defined IP address 192.168.39.44 and MAC address 52:54:00:2c:86:9e in network mk-functional-566210
I0904 19:44:23.857692   22650 main.go:141] libmachine: (functional-566210) Calling .GetSSHPort
I0904 19:44:23.857846   22650 main.go:141] libmachine: (functional-566210) Calling .GetSSHKeyPath
I0904 19:44:23.857977   22650 main.go:141] libmachine: (functional-566210) Calling .GetSSHUsername
I0904 19:44:23.858107   22650 sshutil.go:53] new ssh client: &{IP:192.168.39.44 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19575-5257/.minikube/machines/functional-566210/id_rsa Username:docker}
I0904 19:44:23.943546   22650 build_images.go:161] Building image from path: /tmp/build.1524352576.tar
I0904 19:44:23.943600   22650 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0904 19:44:23.952982   22650 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.1524352576.tar
I0904 19:44:23.956983   22650 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.1524352576.tar: stat -c "%s %y" /var/lib/minikube/build/build.1524352576.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.1524352576.tar': No such file or directory
I0904 19:44:23.957013   22650 ssh_runner.go:362] scp /tmp/build.1524352576.tar --> /var/lib/minikube/build/build.1524352576.tar (3072 bytes)
I0904 19:44:23.982204   22650 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.1524352576
I0904 19:44:23.991729   22650 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.1524352576 -xf /var/lib/minikube/build/build.1524352576.tar
I0904 19:44:24.001644   22650 docker.go:360] Building image: /var/lib/minikube/build/build.1524352576
I0904 19:44:24.001710   22650 ssh_runner.go:195] Run: docker build -t localhost/my-image:functional-566210 /var/lib/minikube/build/build.1524352576
#0 building with "default" instance using docker driver

#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.0s

#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 1.0s

#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s

#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.1s

#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.0s done
#5 sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 770B / 770B done
#5 sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee 527B / 527B done
#5 sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a 1.46kB / 1.46kB done
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0B / 772.79kB 0.1s
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 772.79kB / 772.79kB 0.5s done
#5 extracting sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0.1s done
#5 DONE 0.6s

#6 [2/3] RUN true
#6 DONE 0.5s

#7 [3/3] ADD content.txt /
#7 DONE 0.1s

#8 exporting to image
#8 exporting layers 0.1s done
#8 writing image sha256:ec408698d6d167180cd6ddb57d0527ba50937cafc60d6a0e970d363ff90e60db done
#8 naming to localhost/my-image:functional-566210 0.0s done
#8 DONE 0.1s
I0904 19:44:26.710819   22650 ssh_runner.go:235] Completed: docker build -t localhost/my-image:functional-566210 /var/lib/minikube/build/build.1524352576: (2.709083041s)
I0904 19:44:26.710886   22650 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.1524352576
I0904 19:44:26.725755   22650 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.1524352576.tar
I0904 19:44:26.741801   22650 build_images.go:217] Built localhost/my-image:functional-566210 from /tmp/build.1524352576.tar
I0904 19:44:26.741833   22650 build_images.go:133] succeeded building to: functional-566210
I0904 19:44:26.741838   22650 build_images.go:134] failed building to: 
I0904 19:44:26.741883   22650 main.go:141] libmachine: Making call to close driver server
I0904 19:44:26.741894   22650 main.go:141] libmachine: (functional-566210) Calling .Close
I0904 19:44:26.742176   22650 main.go:141] libmachine: Successfully made call to close driver server
I0904 19:44:26.742193   22650 main.go:141] libmachine: Making call to close connection to plugin binary
I0904 19:44:26.742208   22650 main.go:141] libmachine: Making call to close driver server
I0904 19:44:26.742217   22650 main.go:141] libmachine: (functional-566210) Calling .Close
I0904 19:44:26.742475   22650 main.go:141] libmachine: Successfully made call to close driver server
I0904 19:44:26.742492   22650 main.go:141] libmachine: Making call to close connection to plugin binary
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-566210 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.36s)

TestFunctional/parallel/ImageCommands/Setup (1.54s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:342: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:342: (dbg) Done: docker pull kicbase/echo-server:1.0: (1.517956431s)
functional_test.go:347: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-566210
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.54s)

TestFunctional/parallel/ServiceCmd/DeployApp (26.2s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1439: (dbg) Run:  kubectl --context functional-566210 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1445: (dbg) Run:  kubectl --context functional-566210 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-6b9f76b5c7-lgrjw" [0336e77a-13dc-473d-b2d7-f3f9389eee7d] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-6b9f76b5c7-lgrjw" [0336e77a-13dc-473d-b2d7-f3f9389eee7d] Running
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 26.004147554s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (26.20s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.05s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:355: (dbg) Run:  out/minikube-linux-amd64 -p functional-566210 image load --daemon kicbase/echo-server:functional-566210 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-566210 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.05s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.84s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p functional-566210 image load --daemon kicbase/echo-server:functional-566210 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-566210 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.84s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.48s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:235: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:240: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-566210
functional_test.go:245: (dbg) Run:  out/minikube-linux-amd64 -p functional-566210 image load --daemon kicbase/echo-server:functional-566210 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-566210 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.48s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.38s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-566210 image save kicbase/echo-server:functional-566210 /home/jenkins/workspace/KVM_Linux_integration/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.38s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.82s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:392: (dbg) Run:  out/minikube-linux-amd64 -p functional-566210 image rm kicbase/echo-server:functional-566210 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-566210 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.82s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.72s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:409: (dbg) Run:  out/minikube-linux-amd64 -p functional-566210 image load /home/jenkins/workspace/KVM_Linux_integration/echo-server-save.tar --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-566210 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.72s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.48s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:419: (dbg) Run:  docker rmi kicbase/echo-server:functional-566210
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-566210 image save --daemon kicbase/echo-server:functional-566210 --alsologtostderr
functional_test.go:432: (dbg) Run:  docker image inspect kicbase/echo-server:functional-566210
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.48s)

TestFunctional/parallel/ServiceCmd/List (0.42s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1459: (dbg) Run:  out/minikube-linux-amd64 -p functional-566210 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.42s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.42s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1489: (dbg) Run:  out/minikube-linux-amd64 -p functional-566210 service list -o json
functional_test.go:1494: Took "424.195652ms" to run "out/minikube-linux-amd64 -p functional-566210 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.42s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.38s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1509: (dbg) Run:  out/minikube-linux-amd64 -p functional-566210 service --namespace=default --https --url hello-node
functional_test.go:1522: found endpoint: https://192.168.39.44:32229
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.38s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.29s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1270: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1275: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.29s)

TestFunctional/parallel/ServiceCmd/Format (0.33s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1540: (dbg) Run:  out/minikube-linux-amd64 -p functional-566210 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.33s)

TestFunctional/parallel/ProfileCmd/profile_list (0.26s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1310: (dbg) Run:  out/minikube-linux-amd64 profile list
E0904 19:44:20.296049   12431 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19575-5257/.minikube/profiles/addons-586464/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:1315: Took "209.899476ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1324: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1329: Took "45.781989ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.26s)

TestFunctional/parallel/ServiceCmd/URL (0.28s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1559: (dbg) Run:  out/minikube-linux-amd64 -p functional-566210 service hello-node --url
E0904 19:44:20.377587   12431 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19575-5257/.minikube/profiles/addons-586464/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:1565: found endpoint for hello-node: http://192.168.39.44:32229
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.28s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.3s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1361: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
E0904 19:44:20.539255   12431 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19575-5257/.minikube/profiles/addons-586464/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:1366: Took "254.413011ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1374: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1379: Took "47.614967ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.30s)

TestFunctional/parallel/MountCmd/any-port (7.53s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-566210 /tmp/TestFunctionalparallelMountCmdany-port1167026142/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1725479060596621652" to /tmp/TestFunctionalparallelMountCmdany-port1167026142/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1725479060596621652" to /tmp/TestFunctionalparallelMountCmdany-port1167026142/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1725479060596621652" to /tmp/TestFunctionalparallelMountCmdany-port1167026142/001/test-1725479060596621652
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-566210 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-566210 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (215.027349ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-566210 ssh "findmnt -T /mount-9p | grep 9p"
E0904 19:44:21.502358   12431 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19575-5257/.minikube/profiles/addons-586464/client.crt: no such file or directory" logger="UnhandledError"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-566210 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Sep  4 19:44 created-by-test
-rw-r--r-- 1 docker docker 24 Sep  4 19:44 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Sep  4 19:44 test-1725479060596621652
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-566210 ssh cat /mount-9p/test-1725479060596621652
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-566210 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [b4ca586a-ae20-49ce-b9e7-609579bccf76] Pending
helpers_test.go:344: "busybox-mount" [b4ca586a-ae20-49ce-b9e7-609579bccf76] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [b4ca586a-ae20-49ce-b9e7-609579bccf76] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [b4ca586a-ae20-49ce-b9e7-609579bccf76] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 5.002817403s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-566210 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-566210 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-566210 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-566210 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-566210 /tmp/TestFunctionalparallelMountCmdany-port1167026142/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (7.53s)

TestFunctional/parallel/MountCmd/specific-port (1.52s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-566210 /tmp/TestFunctionalparallelMountCmdspecific-port2845139062/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-566210 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-566210 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (189.234853ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-566210 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-566210 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-566210 /tmp/TestFunctionalparallelMountCmdspecific-port2845139062/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-566210 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-566210 ssh "sudo umount -f /mount-9p": exit status 1 (192.011031ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-566210 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-566210 /tmp/TestFunctionalparallelMountCmdspecific-port2845139062/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.52s)

TestFunctional/parallel/MountCmd/VerifyCleanup (1.51s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-566210 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1642254843/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-566210 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1642254843/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-566210 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1642254843/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-566210 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-566210 ssh "findmnt -T" /mount1: exit status 1 (238.202861ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
E0904 19:44:30.468345   12431 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19575-5257/.minikube/profiles/addons-586464/client.crt: no such file or directory" logger="UnhandledError"
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-566210 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-566210 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-566210 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-566210 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-566210 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1642254843/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-566210 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1642254843/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-566210 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1642254843/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.51s)

TestFunctional/delete_echo-server_images (0.04s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-566210
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

TestFunctional/delete_my-image_image (0.01s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:198: (dbg) Run:  docker rmi -f localhost/my-image:functional-566210
--- PASS: TestFunctional/delete_my-image_image (0.01s)

TestFunctional/delete_minikube_cached_images (0.02s)
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:206: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-566210
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestGvisorAddon (188.91s)
=== RUN   TestGvisorAddon
=== PAUSE TestGvisorAddon
=== CONT  TestGvisorAddon
gvisor_addon_test.go:52: (dbg) Run:  out/minikube-linux-amd64 start -p gvisor-769621 --memory=2200 --container-runtime=containerd --docker-opt containerd=/var/run/containerd/containerd.sock --driver=kvm2 
gvisor_addon_test.go:52: (dbg) Done: out/minikube-linux-amd64 start -p gvisor-769621 --memory=2200 --container-runtime=containerd --docker-opt containerd=/var/run/containerd/containerd.sock --driver=kvm2 : (47.791495383s)
gvisor_addon_test.go:58: (dbg) Run:  out/minikube-linux-amd64 -p gvisor-769621 cache add gcr.io/k8s-minikube/gvisor-addon:2
E0904 20:28:41.517046   12431 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19575-5257/.minikube/profiles/skaffold-117538/client.crt: no such file or directory" logger="UnhandledError"
E0904 20:28:41.523779   12431 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19575-5257/.minikube/profiles/skaffold-117538/client.crt: no such file or directory" logger="UnhandledError"
E0904 20:28:41.536017   12431 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19575-5257/.minikube/profiles/skaffold-117538/client.crt: no such file or directory" logger="UnhandledError"
E0904 20:28:41.557832   12431 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19575-5257/.minikube/profiles/skaffold-117538/client.crt: no such file or directory" logger="UnhandledError"
E0904 20:28:41.599335   12431 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19575-5257/.minikube/profiles/skaffold-117538/client.crt: no such file or directory" logger="UnhandledError"
E0904 20:28:41.681507   12431 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19575-5257/.minikube/profiles/skaffold-117538/client.crt: no such file or directory" logger="UnhandledError"
E0904 20:28:41.842833   12431 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19575-5257/.minikube/profiles/skaffold-117538/client.crt: no such file or directory" logger="UnhandledError"
gvisor_addon_test.go:58: (dbg) Done: out/minikube-linux-amd64 -p gvisor-769621 cache add gcr.io/k8s-minikube/gvisor-addon:2: (22.596060614s)
gvisor_addon_test.go:63: (dbg) Run:  out/minikube-linux-amd64 -p gvisor-769621 addons enable gvisor
E0904 20:28:51.774069   12431 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19575-5257/.minikube/profiles/skaffold-117538/client.crt: no such file or directory" logger="UnhandledError"
E0904 20:28:52.232350   12431 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19575-5257/.minikube/profiles/functional-566210/client.crt: no such file or directory" logger="UnhandledError"
gvisor_addon_test.go:63: (dbg) Done: out/minikube-linux-amd64 -p gvisor-769621 addons enable gvisor: (3.698686683s)
gvisor_addon_test.go:68: (dbg) TestGvisorAddon: waiting 4m0s for pods matching "kubernetes.io/minikube-addons=gvisor" in namespace "kube-system" ...
helpers_test.go:344: "gvisor" [59f18430-12e2-4821-8713-f95a10117fb0] Running
gvisor_addon_test.go:68: (dbg) TestGvisorAddon: kubernetes.io/minikube-addons=gvisor healthy within 6.276659294s
gvisor_addon_test.go:73: (dbg) Run:  kubectl --context gvisor-769621 replace --force -f testdata/nginx-gvisor.yaml
gvisor_addon_test.go:73: (dbg) Done: kubectl --context gvisor-769621 replace --force -f testdata/nginx-gvisor.yaml: (1.009967348s)
gvisor_addon_test.go:78: (dbg) TestGvisorAddon: waiting 4m0s for pods matching "run=nginx,runtime=gvisor" in namespace "default" ...
helpers_test.go:344: "nginx-gvisor" [2994fe5e-7993-4073-b516-5fd100f6905f] Pending
E0904 20:29:02.015516   12431 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19575-5257/.minikube/profiles/skaffold-117538/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "nginx-gvisor" [2994fe5e-7993-4073-b516-5fd100f6905f] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-gvisor" [2994fe5e-7993-4073-b516-5fd100f6905f] Running
gvisor_addon_test.go:78: (dbg) TestGvisorAddon: run=nginx,runtime=gvisor healthy within 39.004605882s
gvisor_addon_test.go:83: (dbg) Run:  out/minikube-linux-amd64 stop -p gvisor-769621
gvisor_addon_test.go:83: (dbg) Done: out/minikube-linux-amd64 stop -p gvisor-769621: (7.325817115s)
gvisor_addon_test.go:88: (dbg) Run:  out/minikube-linux-amd64 start -p gvisor-769621 --memory=2200 --container-runtime=containerd --docker-opt containerd=/var/run/containerd/containerd.sock --driver=kvm2 
E0904 20:30:03.459502   12431 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19575-5257/.minikube/profiles/skaffold-117538/client.crt: no such file or directory" logger="UnhandledError"
gvisor_addon_test.go:88: (dbg) Done: out/minikube-linux-amd64 start -p gvisor-769621 --memory=2200 --container-runtime=containerd --docker-opt containerd=/var/run/containerd/containerd.sock --driver=kvm2 : (49.136226801s)
gvisor_addon_test.go:92: (dbg) TestGvisorAddon: waiting 4m0s for pods matching "kubernetes.io/minikube-addons=gvisor" in namespace "kube-system" ...
helpers_test.go:344: "gvisor" [59f18430-12e2-4821-8713-f95a10117fb0] Running / Ready:ContainersNotReady (containers with unready status: [gvisor]) / ContainersReady:ContainersNotReady (containers with unready status: [gvisor])
gvisor_addon_test.go:92: (dbg) TestGvisorAddon: kubernetes.io/minikube-addons=gvisor healthy within 6.004562577s
gvisor_addon_test.go:95: (dbg) TestGvisorAddon: waiting 4m0s for pods matching "run=nginx,runtime=gvisor" in namespace "default" ...
helpers_test.go:344: "nginx-gvisor" [2994fe5e-7993-4073-b516-5fd100f6905f] Running / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
gvisor_addon_test.go:95: (dbg) TestGvisorAddon: run=nginx,runtime=gvisor healthy within 5.005236529s
helpers_test.go:175: Cleaning up "gvisor-769621" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p gvisor-769621
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p gvisor-769621: (1.053845713s)
--- PASS: TestGvisorAddon (188.91s)

TestMultiControlPlane/serial/StartCluster (216.13s)
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p ha-316015 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2 
E0904 19:45:01.192107   12431 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19575-5257/.minikube/profiles/addons-586464/client.crt: no such file or directory" logger="UnhandledError"
E0904 19:45:42.154333   12431 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19575-5257/.minikube/profiles/addons-586464/client.crt: no such file or directory" logger="UnhandledError"
E0904 19:47:04.075758   12431 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19575-5257/.minikube/profiles/addons-586464/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 start -p ha-316015 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2 : (3m35.476306059s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-316015 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (216.13s)

TestMultiControlPlane/serial/DeployApp (6.4s)
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-316015 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-316015 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 kubectl -p ha-316015 -- rollout status deployment/busybox: (4.184213435s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-316015 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-316015 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-316015 -- exec busybox-7dff88458-788cp -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-316015 -- exec busybox-7dff88458-kppjc -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-316015 -- exec busybox-7dff88458-qklm7 -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-316015 -- exec busybox-7dff88458-788cp -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-316015 -- exec busybox-7dff88458-kppjc -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-316015 -- exec busybox-7dff88458-qklm7 -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-316015 -- exec busybox-7dff88458-788cp -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-316015 -- exec busybox-7dff88458-kppjc -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-316015 -- exec busybox-7dff88458-qklm7 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (6.40s)

TestMultiControlPlane/serial/PingHostFromPods (1.2s)
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-316015 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-316015 -- exec busybox-7dff88458-788cp -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-316015 -- exec busybox-7dff88458-788cp -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-316015 -- exec busybox-7dff88458-kppjc -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-316015 -- exec busybox-7dff88458-kppjc -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-316015 -- exec busybox-7dff88458-qklm7 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-316015 -- exec busybox-7dff88458-qklm7 -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.20s)

TestMultiControlPlane/serial/AddWorkerNode (62.16s)
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-316015 -v=7 --alsologtostderr
E0904 19:48:52.232892   12431 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19575-5257/.minikube/profiles/functional-566210/client.crt: no such file or directory" logger="UnhandledError"
E0904 19:48:52.239400   12431 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19575-5257/.minikube/profiles/functional-566210/client.crt: no such file or directory" logger="UnhandledError"
E0904 19:48:52.251670   12431 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19575-5257/.minikube/profiles/functional-566210/client.crt: no such file or directory" logger="UnhandledError"
E0904 19:48:52.273088   12431 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19575-5257/.minikube/profiles/functional-566210/client.crt: no such file or directory" logger="UnhandledError"
E0904 19:48:52.314546   12431 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19575-5257/.minikube/profiles/functional-566210/client.crt: no such file or directory" logger="UnhandledError"
E0904 19:48:52.395967   12431 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19575-5257/.minikube/profiles/functional-566210/client.crt: no such file or directory" logger="UnhandledError"
E0904 19:48:52.557225   12431 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19575-5257/.minikube/profiles/functional-566210/client.crt: no such file or directory" logger="UnhandledError"
E0904 19:48:52.879219   12431 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19575-5257/.minikube/profiles/functional-566210/client.crt: no such file or directory" logger="UnhandledError"
E0904 19:48:53.521248   12431 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19575-5257/.minikube/profiles/functional-566210/client.crt: no such file or directory" logger="UnhandledError"
E0904 19:48:54.802858   12431 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19575-5257/.minikube/profiles/functional-566210/client.crt: no such file or directory" logger="UnhandledError"
E0904 19:48:57.365129   12431 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19575-5257/.minikube/profiles/functional-566210/client.crt: no such file or directory" logger="UnhandledError"
E0904 19:49:02.487455   12431 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19575-5257/.minikube/profiles/functional-566210/client.crt: no such file or directory" logger="UnhandledError"
E0904 19:49:12.729324   12431 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19575-5257/.minikube/profiles/functional-566210/client.crt: no such file or directory" logger="UnhandledError"
E0904 19:49:20.214483   12431 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19575-5257/.minikube/profiles/addons-586464/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 node add -p ha-316015 -v=7 --alsologtostderr: (1m1.350154543s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-316015 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (62.16s)

TestMultiControlPlane/serial/NodeLabels (0.06s)
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-316015 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.06s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (0.55s)
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.55s)

TestMultiControlPlane/serial/CopyFile (12.55s)
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-linux-amd64 -p ha-316015 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-316015 cp testdata/cp-test.txt ha-316015:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-316015 ssh -n ha-316015 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-316015 cp ha-316015:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2492004602/001/cp-test_ha-316015.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-316015 ssh -n ha-316015 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-316015 cp ha-316015:/home/docker/cp-test.txt ha-316015-m02:/home/docker/cp-test_ha-316015_ha-316015-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-316015 ssh -n ha-316015 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-316015 ssh -n ha-316015-m02 "sudo cat /home/docker/cp-test_ha-316015_ha-316015-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-316015 cp ha-316015:/home/docker/cp-test.txt ha-316015-m03:/home/docker/cp-test_ha-316015_ha-316015-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-316015 ssh -n ha-316015 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-316015 ssh -n ha-316015-m03 "sudo cat /home/docker/cp-test_ha-316015_ha-316015-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-316015 cp ha-316015:/home/docker/cp-test.txt ha-316015-m04:/home/docker/cp-test_ha-316015_ha-316015-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-316015 ssh -n ha-316015 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-316015 ssh -n ha-316015-m04 "sudo cat /home/docker/cp-test_ha-316015_ha-316015-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-316015 cp testdata/cp-test.txt ha-316015-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-316015 ssh -n ha-316015-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-316015 cp ha-316015-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2492004602/001/cp-test_ha-316015-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-316015 ssh -n ha-316015-m02 "sudo cat /home/docker/cp-test.txt"
E0904 19:49:33.210949   12431 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19575-5257/.minikube/profiles/functional-566210/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-316015 cp ha-316015-m02:/home/docker/cp-test.txt ha-316015:/home/docker/cp-test_ha-316015-m02_ha-316015.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-316015 ssh -n ha-316015-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-316015 ssh -n ha-316015 "sudo cat /home/docker/cp-test_ha-316015-m02_ha-316015.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-316015 cp ha-316015-m02:/home/docker/cp-test.txt ha-316015-m03:/home/docker/cp-test_ha-316015-m02_ha-316015-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-316015 ssh -n ha-316015-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-316015 ssh -n ha-316015-m03 "sudo cat /home/docker/cp-test_ha-316015-m02_ha-316015-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-316015 cp ha-316015-m02:/home/docker/cp-test.txt ha-316015-m04:/home/docker/cp-test_ha-316015-m02_ha-316015-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-316015 ssh -n ha-316015-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-316015 ssh -n ha-316015-m04 "sudo cat /home/docker/cp-test_ha-316015-m02_ha-316015-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-316015 cp testdata/cp-test.txt ha-316015-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-316015 ssh -n ha-316015-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-316015 cp ha-316015-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2492004602/001/cp-test_ha-316015-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-316015 ssh -n ha-316015-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-316015 cp ha-316015-m03:/home/docker/cp-test.txt ha-316015:/home/docker/cp-test_ha-316015-m03_ha-316015.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-316015 ssh -n ha-316015-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-316015 ssh -n ha-316015 "sudo cat /home/docker/cp-test_ha-316015-m03_ha-316015.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-316015 cp ha-316015-m03:/home/docker/cp-test.txt ha-316015-m02:/home/docker/cp-test_ha-316015-m03_ha-316015-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-316015 ssh -n ha-316015-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-316015 ssh -n ha-316015-m02 "sudo cat /home/docker/cp-test_ha-316015-m03_ha-316015-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-316015 cp ha-316015-m03:/home/docker/cp-test.txt ha-316015-m04:/home/docker/cp-test_ha-316015-m03_ha-316015-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-316015 ssh -n ha-316015-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-316015 ssh -n ha-316015-m04 "sudo cat /home/docker/cp-test_ha-316015-m03_ha-316015-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-316015 cp testdata/cp-test.txt ha-316015-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-316015 ssh -n ha-316015-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-316015 cp ha-316015-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2492004602/001/cp-test_ha-316015-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-316015 ssh -n ha-316015-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-316015 cp ha-316015-m04:/home/docker/cp-test.txt ha-316015:/home/docker/cp-test_ha-316015-m04_ha-316015.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-316015 ssh -n ha-316015-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-316015 ssh -n ha-316015 "sudo cat /home/docker/cp-test_ha-316015-m04_ha-316015.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-316015 cp ha-316015-m04:/home/docker/cp-test.txt ha-316015-m02:/home/docker/cp-test_ha-316015-m04_ha-316015-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-316015 ssh -n ha-316015-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-316015 ssh -n ha-316015-m02 "sudo cat /home/docker/cp-test_ha-316015-m04_ha-316015-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-316015 cp ha-316015-m04:/home/docker/cp-test.txt ha-316015-m03:/home/docker/cp-test_ha-316015-m04_ha-316015-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-316015 ssh -n ha-316015-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-316015 ssh -n ha-316015-m03 "sudo cat /home/docker/cp-test_ha-316015-m04_ha-316015-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (12.55s)

TestMultiControlPlane/serial/StopSecondaryNode (13.23s)
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-linux-amd64 -p ha-316015 node stop m02 -v=7 --alsologtostderr
E0904 19:49:47.917753   12431 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19575-5257/.minikube/profiles/addons-586464/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:363: (dbg) Done: out/minikube-linux-amd64 -p ha-316015 node stop m02 -v=7 --alsologtostderr: (12.611848247s)
ha_test.go:369: (dbg) Run:  out/minikube-linux-amd64 -p ha-316015 status -v=7 --alsologtostderr
ha_test.go:369: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-316015 status -v=7 --alsologtostderr: exit status 7 (617.847535ms)

-- stdout --
	ha-316015
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-316015-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-316015-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-316015-m04
	type: Worker
	host: Running
	kubelet: Running

-- /stdout --
** stderr ** 
	I0904 19:49:53.976005   27603 out.go:345] Setting OutFile to fd 1 ...
	I0904 19:49:53.976130   27603 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0904 19:49:53.976140   27603 out.go:358] Setting ErrFile to fd 2...
	I0904 19:49:53.976144   27603 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0904 19:49:53.976312   27603 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19575-5257/.minikube/bin
	I0904 19:49:53.976493   27603 out.go:352] Setting JSON to false
	I0904 19:49:53.976523   27603 mustload.go:65] Loading cluster: ha-316015
	I0904 19:49:53.976625   27603 notify.go:220] Checking for updates...
	I0904 19:49:53.976936   27603 config.go:182] Loaded profile config "ha-316015": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0904 19:49:53.976952   27603 status.go:255] checking status of ha-316015 ...
	I0904 19:49:53.977313   27603 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0904 19:49:53.977393   27603 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0904 19:49:53.996394   27603 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35293
	I0904 19:49:53.996830   27603 main.go:141] libmachine: () Calling .GetVersion
	I0904 19:49:53.997378   27603 main.go:141] libmachine: Using API Version  1
	I0904 19:49:53.997403   27603 main.go:141] libmachine: () Calling .SetConfigRaw
	I0904 19:49:53.997834   27603 main.go:141] libmachine: () Calling .GetMachineName
	I0904 19:49:53.998095   27603 main.go:141] libmachine: (ha-316015) Calling .GetState
	I0904 19:49:53.999649   27603 status.go:330] ha-316015 host status = "Running" (err=<nil>)
	I0904 19:49:53.999666   27603 host.go:66] Checking if "ha-316015" exists ...
	I0904 19:49:54.000069   27603 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0904 19:49:54.000110   27603 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0904 19:49:54.014693   27603 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35001
	I0904 19:49:54.015108   27603 main.go:141] libmachine: () Calling .GetVersion
	I0904 19:49:54.015602   27603 main.go:141] libmachine: Using API Version  1
	I0904 19:49:54.015617   27603 main.go:141] libmachine: () Calling .SetConfigRaw
	I0904 19:49:54.015896   27603 main.go:141] libmachine: () Calling .GetMachineName
	I0904 19:49:54.016064   27603 main.go:141] libmachine: (ha-316015) Calling .GetIP
	I0904 19:49:54.019122   27603 main.go:141] libmachine: (ha-316015) DBG | domain ha-316015 has defined MAC address 52:54:00:e6:d4:6a in network mk-ha-316015
	I0904 19:49:54.019538   27603 main.go:141] libmachine: (ha-316015) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:d4:6a", ip: ""} in network mk-ha-316015: {Iface:virbr1 ExpiryTime:2024-09-04 20:44:56 +0000 UTC Type:0 Mac:52:54:00:e6:d4:6a Iaid: IPaddr:192.168.39.232 Prefix:24 Hostname:ha-316015 Clientid:01:52:54:00:e6:d4:6a}
	I0904 19:49:54.019572   27603 main.go:141] libmachine: (ha-316015) DBG | domain ha-316015 has defined IP address 192.168.39.232 and MAC address 52:54:00:e6:d4:6a in network mk-ha-316015
	I0904 19:49:54.019711   27603 host.go:66] Checking if "ha-316015" exists ...
	I0904 19:49:54.020070   27603 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0904 19:49:54.020108   27603 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0904 19:49:54.034399   27603 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46429
	I0904 19:49:54.034789   27603 main.go:141] libmachine: () Calling .GetVersion
	I0904 19:49:54.035274   27603 main.go:141] libmachine: Using API Version  1
	I0904 19:49:54.035305   27603 main.go:141] libmachine: () Calling .SetConfigRaw
	I0904 19:49:54.035600   27603 main.go:141] libmachine: () Calling .GetMachineName
	I0904 19:49:54.035792   27603 main.go:141] libmachine: (ha-316015) Calling .DriverName
	I0904 19:49:54.035990   27603 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0904 19:49:54.036016   27603 main.go:141] libmachine: (ha-316015) Calling .GetSSHHostname
	I0904 19:49:54.038559   27603 main.go:141] libmachine: (ha-316015) DBG | domain ha-316015 has defined MAC address 52:54:00:e6:d4:6a in network mk-ha-316015
	I0904 19:49:54.038949   27603 main.go:141] libmachine: (ha-316015) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:d4:6a", ip: ""} in network mk-ha-316015: {Iface:virbr1 ExpiryTime:2024-09-04 20:44:56 +0000 UTC Type:0 Mac:52:54:00:e6:d4:6a Iaid: IPaddr:192.168.39.232 Prefix:24 Hostname:ha-316015 Clientid:01:52:54:00:e6:d4:6a}
	I0904 19:49:54.038986   27603 main.go:141] libmachine: (ha-316015) DBG | domain ha-316015 has defined IP address 192.168.39.232 and MAC address 52:54:00:e6:d4:6a in network mk-ha-316015
	I0904 19:49:54.039112   27603 main.go:141] libmachine: (ha-316015) Calling .GetSSHPort
	I0904 19:49:54.039286   27603 main.go:141] libmachine: (ha-316015) Calling .GetSSHKeyPath
	I0904 19:49:54.039430   27603 main.go:141] libmachine: (ha-316015) Calling .GetSSHUsername
	I0904 19:49:54.039569   27603 sshutil.go:53] new ssh client: &{IP:192.168.39.232 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19575-5257/.minikube/machines/ha-316015/id_rsa Username:docker}
	I0904 19:49:54.121776   27603 ssh_runner.go:195] Run: systemctl --version
	I0904 19:49:54.128548   27603 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0904 19:49:54.145114   27603 kubeconfig.go:125] found "ha-316015" server: "https://192.168.39.254:8443"
	I0904 19:49:54.145156   27603 api_server.go:166] Checking apiserver status ...
	I0904 19:49:54.145207   27603 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0904 19:49:54.162555   27603 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1878/cgroup
	W0904 19:49:54.173823   27603 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1878/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0904 19:49:54.173873   27603 ssh_runner.go:195] Run: ls
	I0904 19:49:54.178370   27603 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0904 19:49:54.184578   27603 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0904 19:49:54.184603   27603 status.go:422] ha-316015 apiserver status = Running (err=<nil>)
	I0904 19:49:54.184615   27603 status.go:257] ha-316015 status: &{Name:ha-316015 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0904 19:49:54.184647   27603 status.go:255] checking status of ha-316015-m02 ...
	I0904 19:49:54.184997   27603 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0904 19:49:54.185040   27603 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0904 19:49:54.199787   27603 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33365
	I0904 19:49:54.200146   27603 main.go:141] libmachine: () Calling .GetVersion
	I0904 19:49:54.200612   27603 main.go:141] libmachine: Using API Version  1
	I0904 19:49:54.200634   27603 main.go:141] libmachine: () Calling .SetConfigRaw
	I0904 19:49:54.200965   27603 main.go:141] libmachine: () Calling .GetMachineName
	I0904 19:49:54.201141   27603 main.go:141] libmachine: (ha-316015-m02) Calling .GetState
	I0904 19:49:54.202697   27603 status.go:330] ha-316015-m02 host status = "Stopped" (err=<nil>)
	I0904 19:49:54.202713   27603 status.go:343] host is not running, skipping remaining checks
	I0904 19:49:54.202721   27603 status.go:257] ha-316015-m02 status: &{Name:ha-316015-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0904 19:49:54.202742   27603 status.go:255] checking status of ha-316015-m03 ...
	I0904 19:49:54.203013   27603 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0904 19:49:54.203049   27603 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0904 19:49:54.217656   27603 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39865
	I0904 19:49:54.218016   27603 main.go:141] libmachine: () Calling .GetVersion
	I0904 19:49:54.218437   27603 main.go:141] libmachine: Using API Version  1
	I0904 19:49:54.218457   27603 main.go:141] libmachine: () Calling .SetConfigRaw
	I0904 19:49:54.218769   27603 main.go:141] libmachine: () Calling .GetMachineName
	I0904 19:49:54.218944   27603 main.go:141] libmachine: (ha-316015-m03) Calling .GetState
	I0904 19:49:54.220586   27603 status.go:330] ha-316015-m03 host status = "Running" (err=<nil>)
	I0904 19:49:54.220600   27603 host.go:66] Checking if "ha-316015-m03" exists ...
	I0904 19:49:54.220952   27603 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0904 19:49:54.220991   27603 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0904 19:49:54.235783   27603 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45113
	I0904 19:49:54.236210   27603 main.go:141] libmachine: () Calling .GetVersion
	I0904 19:49:54.236735   27603 main.go:141] libmachine: Using API Version  1
	I0904 19:49:54.236753   27603 main.go:141] libmachine: () Calling .SetConfigRaw
	I0904 19:49:54.237096   27603 main.go:141] libmachine: () Calling .GetMachineName
	I0904 19:49:54.237328   27603 main.go:141] libmachine: (ha-316015-m03) Calling .GetIP
	I0904 19:49:54.240402   27603 main.go:141] libmachine: (ha-316015-m03) DBG | domain ha-316015-m03 has defined MAC address 52:54:00:24:0f:4a in network mk-ha-316015
	I0904 19:49:54.240903   27603 main.go:141] libmachine: (ha-316015-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:0f:4a", ip: ""} in network mk-ha-316015: {Iface:virbr1 ExpiryTime:2024-09-04 20:47:11 +0000 UTC Type:0 Mac:52:54:00:24:0f:4a Iaid: IPaddr:192.168.39.135 Prefix:24 Hostname:ha-316015-m03 Clientid:01:52:54:00:24:0f:4a}
	I0904 19:49:54.240925   27603 main.go:141] libmachine: (ha-316015-m03) DBG | domain ha-316015-m03 has defined IP address 192.168.39.135 and MAC address 52:54:00:24:0f:4a in network mk-ha-316015
	I0904 19:49:54.241062   27603 host.go:66] Checking if "ha-316015-m03" exists ...
	I0904 19:49:54.241398   27603 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0904 19:49:54.241439   27603 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0904 19:49:54.255838   27603 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42351
	I0904 19:49:54.256249   27603 main.go:141] libmachine: () Calling .GetVersion
	I0904 19:49:54.256673   27603 main.go:141] libmachine: Using API Version  1
	I0904 19:49:54.256693   27603 main.go:141] libmachine: () Calling .SetConfigRaw
	I0904 19:49:54.256968   27603 main.go:141] libmachine: () Calling .GetMachineName
	I0904 19:49:54.257109   27603 main.go:141] libmachine: (ha-316015-m03) Calling .DriverName
	I0904 19:49:54.257277   27603 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0904 19:49:54.257298   27603 main.go:141] libmachine: (ha-316015-m03) Calling .GetSSHHostname
	I0904 19:49:54.259910   27603 main.go:141] libmachine: (ha-316015-m03) DBG | domain ha-316015-m03 has defined MAC address 52:54:00:24:0f:4a in network mk-ha-316015
	I0904 19:49:54.260330   27603 main.go:141] libmachine: (ha-316015-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:0f:4a", ip: ""} in network mk-ha-316015: {Iface:virbr1 ExpiryTime:2024-09-04 20:47:11 +0000 UTC Type:0 Mac:52:54:00:24:0f:4a Iaid: IPaddr:192.168.39.135 Prefix:24 Hostname:ha-316015-m03 Clientid:01:52:54:00:24:0f:4a}
	I0904 19:49:54.260364   27603 main.go:141] libmachine: (ha-316015-m03) DBG | domain ha-316015-m03 has defined IP address 192.168.39.135 and MAC address 52:54:00:24:0f:4a in network mk-ha-316015
	I0904 19:49:54.260493   27603 main.go:141] libmachine: (ha-316015-m03) Calling .GetSSHPort
	I0904 19:49:54.260671   27603 main.go:141] libmachine: (ha-316015-m03) Calling .GetSSHKeyPath
	I0904 19:49:54.260821   27603 main.go:141] libmachine: (ha-316015-m03) Calling .GetSSHUsername
	I0904 19:49:54.260967   27603 sshutil.go:53] new ssh client: &{IP:192.168.39.135 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19575-5257/.minikube/machines/ha-316015-m03/id_rsa Username:docker}
	I0904 19:49:54.346545   27603 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0904 19:49:54.362341   27603 kubeconfig.go:125] found "ha-316015" server: "https://192.168.39.254:8443"
	I0904 19:49:54.362366   27603 api_server.go:166] Checking apiserver status ...
	I0904 19:49:54.362401   27603 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0904 19:49:54.376218   27603 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1806/cgroup
	W0904 19:49:54.386031   27603 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1806/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0904 19:49:54.386085   27603 ssh_runner.go:195] Run: ls
	I0904 19:49:54.390791   27603 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0904 19:49:54.394725   27603 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0904 19:49:54.394745   27603 status.go:422] ha-316015-m03 apiserver status = Running (err=<nil>)
	I0904 19:49:54.394754   27603 status.go:257] ha-316015-m03 status: &{Name:ha-316015-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0904 19:49:54.394774   27603 status.go:255] checking status of ha-316015-m04 ...
	I0904 19:49:54.395094   27603 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0904 19:49:54.395134   27603 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0904 19:49:54.410421   27603 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35131
	I0904 19:49:54.410919   27603 main.go:141] libmachine: () Calling .GetVersion
	I0904 19:49:54.411427   27603 main.go:141] libmachine: Using API Version  1
	I0904 19:49:54.411453   27603 main.go:141] libmachine: () Calling .SetConfigRaw
	I0904 19:49:54.411772   27603 main.go:141] libmachine: () Calling .GetMachineName
	I0904 19:49:54.411947   27603 main.go:141] libmachine: (ha-316015-m04) Calling .GetState
	I0904 19:49:54.413450   27603 status.go:330] ha-316015-m04 host status = "Running" (err=<nil>)
	I0904 19:49:54.413464   27603 host.go:66] Checking if "ha-316015-m04" exists ...
	I0904 19:49:54.413737   27603 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0904 19:49:54.413768   27603 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0904 19:49:54.428404   27603 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44501
	I0904 19:49:54.428869   27603 main.go:141] libmachine: () Calling .GetVersion
	I0904 19:49:54.429319   27603 main.go:141] libmachine: Using API Version  1
	I0904 19:49:54.429338   27603 main.go:141] libmachine: () Calling .SetConfigRaw
	I0904 19:49:54.429685   27603 main.go:141] libmachine: () Calling .GetMachineName
	I0904 19:49:54.429856   27603 main.go:141] libmachine: (ha-316015-m04) Calling .GetIP
	I0904 19:49:54.432362   27603 main.go:141] libmachine: (ha-316015-m04) DBG | domain ha-316015-m04 has defined MAC address 52:54:00:09:46:6a in network mk-ha-316015
	I0904 19:49:54.432779   27603 main.go:141] libmachine: (ha-316015-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:09:46:6a", ip: ""} in network mk-ha-316015: {Iface:virbr1 ExpiryTime:2024-09-04 20:48:40 +0000 UTC Type:0 Mac:52:54:00:09:46:6a Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:ha-316015-m04 Clientid:01:52:54:00:09:46:6a}
	I0904 19:49:54.432802   27603 main.go:141] libmachine: (ha-316015-m04) DBG | domain ha-316015-m04 has defined IP address 192.168.39.165 and MAC address 52:54:00:09:46:6a in network mk-ha-316015
	I0904 19:49:54.432949   27603 host.go:66] Checking if "ha-316015-m04" exists ...
	I0904 19:49:54.433245   27603 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0904 19:49:54.433275   27603 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0904 19:49:54.447958   27603 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41773
	I0904 19:49:54.448337   27603 main.go:141] libmachine: () Calling .GetVersion
	I0904 19:49:54.448755   27603 main.go:141] libmachine: Using API Version  1
	I0904 19:49:54.448772   27603 main.go:141] libmachine: () Calling .SetConfigRaw
	I0904 19:49:54.449085   27603 main.go:141] libmachine: () Calling .GetMachineName
	I0904 19:49:54.449308   27603 main.go:141] libmachine: (ha-316015-m04) Calling .DriverName
	I0904 19:49:54.449563   27603 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0904 19:49:54.449585   27603 main.go:141] libmachine: (ha-316015-m04) Calling .GetSSHHostname
	I0904 19:49:54.452334   27603 main.go:141] libmachine: (ha-316015-m04) DBG | domain ha-316015-m04 has defined MAC address 52:54:00:09:46:6a in network mk-ha-316015
	I0904 19:49:54.452769   27603 main.go:141] libmachine: (ha-316015-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:09:46:6a", ip: ""} in network mk-ha-316015: {Iface:virbr1 ExpiryTime:2024-09-04 20:48:40 +0000 UTC Type:0 Mac:52:54:00:09:46:6a Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:ha-316015-m04 Clientid:01:52:54:00:09:46:6a}
	I0904 19:49:54.452789   27603 main.go:141] libmachine: (ha-316015-m04) DBG | domain ha-316015-m04 has defined IP address 192.168.39.165 and MAC address 52:54:00:09:46:6a in network mk-ha-316015
	I0904 19:49:54.452931   27603 main.go:141] libmachine: (ha-316015-m04) Calling .GetSSHPort
	I0904 19:49:54.453089   27603 main.go:141] libmachine: (ha-316015-m04) Calling .GetSSHKeyPath
	I0904 19:49:54.453391   27603 main.go:141] libmachine: (ha-316015-m04) Calling .GetSSHUsername
	I0904 19:49:54.453546   27603 sshutil.go:53] new ssh client: &{IP:192.168.39.165 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19575-5257/.minikube/machines/ha-316015-m04/id_rsa Username:docker}
	I0904 19:49:54.537151   27603 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0904 19:49:54.552310   27603 status.go:257] ha-316015-m04 status: &{Name:ha-316015-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (13.23s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.41s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.41s)

                                                
                                    
TestMultiControlPlane/serial/RestartSecondaryNode (43.48s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-linux-amd64 -p ha-316015 node start m02 -v=7 --alsologtostderr
E0904 19:50:14.172935   12431 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19575-5257/.minikube/profiles/functional-566210/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:420: (dbg) Done: out/minikube-linux-amd64 -p ha-316015 node start m02 -v=7 --alsologtostderr: (42.614701648s)
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-316015 status -v=7 --alsologtostderr
ha_test.go:448: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (43.48s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.5s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.50s)

                                                
                                    
TestMultiControlPlane/serial/RestartClusterKeepsNodes (223.27s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-316015 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-linux-amd64 stop -p ha-316015 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Done: out/minikube-linux-amd64 stop -p ha-316015 -v=7 --alsologtostderr: (40.690338392s)
ha_test.go:467: (dbg) Run:  out/minikube-linux-amd64 start -p ha-316015 --wait=true -v=7 --alsologtostderr
E0904 19:51:36.095287   12431 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19575-5257/.minikube/profiles/functional-566210/client.crt: no such file or directory" logger="UnhandledError"
E0904 19:53:52.232742   12431 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19575-5257/.minikube/profiles/functional-566210/client.crt: no such file or directory" logger="UnhandledError"
E0904 19:54:19.937199   12431 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19575-5257/.minikube/profiles/functional-566210/client.crt: no such file or directory" logger="UnhandledError"
E0904 19:54:20.214907   12431 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19575-5257/.minikube/profiles/addons-586464/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:467: (dbg) Done: out/minikube-linux-amd64 start -p ha-316015 --wait=true -v=7 --alsologtostderr: (3m2.488333438s)
ha_test.go:472: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-316015
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (223.27s)

                                                
                                    
TestMultiControlPlane/serial/DeleteSecondaryNode (7.04s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-linux-amd64 -p ha-316015 node delete m03 -v=7 --alsologtostderr
ha_test.go:487: (dbg) Done: out/minikube-linux-amd64 -p ha-316015 node delete m03 -v=7 --alsologtostderr: (6.331405533s)
ha_test.go:493: (dbg) Run:  out/minikube-linux-amd64 -p ha-316015 status -v=7 --alsologtostderr
ha_test.go:511: (dbg) Run:  kubectl get nodes
ha_test.go:519: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (7.04s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.37s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.37s)

                                                
                                    
TestMultiControlPlane/serial/StopCluster (37.61s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-linux-amd64 -p ha-316015 stop -v=7 --alsologtostderr
ha_test.go:531: (dbg) Done: out/minikube-linux-amd64 -p ha-316015 stop -v=7 --alsologtostderr: (37.511224758s)
ha_test.go:537: (dbg) Run:  out/minikube-linux-amd64 -p ha-316015 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-316015 status -v=7 --alsologtostderr: exit status 7 (101.912842ms)

                                                
                                                
-- stdout --
	ha-316015
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-316015-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-316015-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0904 19:55:07.169205   29886 out.go:345] Setting OutFile to fd 1 ...
	I0904 19:55:07.169312   29886 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0904 19:55:07.169319   29886 out.go:358] Setting ErrFile to fd 2...
	I0904 19:55:07.169324   29886 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0904 19:55:07.169555   29886 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19575-5257/.minikube/bin
	I0904 19:55:07.169732   29886 out.go:352] Setting JSON to false
	I0904 19:55:07.169757   29886 mustload.go:65] Loading cluster: ha-316015
	I0904 19:55:07.169926   29886 notify.go:220] Checking for updates...
	I0904 19:55:07.170123   29886 config.go:182] Loaded profile config "ha-316015": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0904 19:55:07.170138   29886 status.go:255] checking status of ha-316015 ...
	I0904 19:55:07.170525   29886 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0904 19:55:07.170586   29886 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0904 19:55:07.190529   29886 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34625
	I0904 19:55:07.190963   29886 main.go:141] libmachine: () Calling .GetVersion
	I0904 19:55:07.191571   29886 main.go:141] libmachine: Using API Version  1
	I0904 19:55:07.191598   29886 main.go:141] libmachine: () Calling .SetConfigRaw
	I0904 19:55:07.191985   29886 main.go:141] libmachine: () Calling .GetMachineName
	I0904 19:55:07.192170   29886 main.go:141] libmachine: (ha-316015) Calling .GetState
	I0904 19:55:07.193975   29886 status.go:330] ha-316015 host status = "Stopped" (err=<nil>)
	I0904 19:55:07.193995   29886 status.go:343] host is not running, skipping remaining checks
	I0904 19:55:07.194004   29886 status.go:257] ha-316015 status: &{Name:ha-316015 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0904 19:55:07.194034   29886 status.go:255] checking status of ha-316015-m02 ...
	I0904 19:55:07.194316   29886 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0904 19:55:07.194361   29886 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0904 19:55:07.208837   29886 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43465
	I0904 19:55:07.209217   29886 main.go:141] libmachine: () Calling .GetVersion
	I0904 19:55:07.209716   29886 main.go:141] libmachine: Using API Version  1
	I0904 19:55:07.209753   29886 main.go:141] libmachine: () Calling .SetConfigRaw
	I0904 19:55:07.210057   29886 main.go:141] libmachine: () Calling .GetMachineName
	I0904 19:55:07.210235   29886 main.go:141] libmachine: (ha-316015-m02) Calling .GetState
	I0904 19:55:07.211983   29886 status.go:330] ha-316015-m02 host status = "Stopped" (err=<nil>)
	I0904 19:55:07.211996   29886 status.go:343] host is not running, skipping remaining checks
	I0904 19:55:07.212002   29886 status.go:257] ha-316015-m02 status: &{Name:ha-316015-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0904 19:55:07.212027   29886 status.go:255] checking status of ha-316015-m04 ...
	I0904 19:55:07.212309   29886 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0904 19:55:07.212341   29886 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0904 19:55:07.227103   29886 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38841
	I0904 19:55:07.227526   29886 main.go:141] libmachine: () Calling .GetVersion
	I0904 19:55:07.227997   29886 main.go:141] libmachine: Using API Version  1
	I0904 19:55:07.228020   29886 main.go:141] libmachine: () Calling .SetConfigRaw
	I0904 19:55:07.228404   29886 main.go:141] libmachine: () Calling .GetMachineName
	I0904 19:55:07.228614   29886 main.go:141] libmachine: (ha-316015-m04) Calling .GetState
	I0904 19:55:07.230528   29886 status.go:330] ha-316015-m04 host status = "Stopped" (err=<nil>)
	I0904 19:55:07.230550   29886 status.go:343] host is not running, skipping remaining checks
	I0904 19:55:07.230558   29886 status.go:257] ha-316015-m04 status: &{Name:ha-316015-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (37.61s)

                                                
                                    
TestMultiControlPlane/serial/RestartCluster (121.73s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-linux-amd64 start -p ha-316015 --wait=true -v=7 --alsologtostderr --driver=kvm2 
ha_test.go:560: (dbg) Done: out/minikube-linux-amd64 start -p ha-316015 --wait=true -v=7 --alsologtostderr --driver=kvm2 : (2m1.029105496s)
ha_test.go:566: (dbg) Run:  out/minikube-linux-amd64 -p ha-316015 status -v=7 --alsologtostderr
ha_test.go:584: (dbg) Run:  kubectl get nodes
ha_test.go:592: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (121.73s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.37s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.37s)

                                                
                                    
TestMultiControlPlane/serial/AddSecondaryNode (188.09s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-316015 --control-plane -v=7 --alsologtostderr
E0904 19:58:52.233531   12431 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19575-5257/.minikube/profiles/functional-566210/client.crt: no such file or directory" logger="UnhandledError"
E0904 19:59:20.215605   12431 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19575-5257/.minikube/profiles/addons-586464/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:605: (dbg) Done: out/minikube-linux-amd64 node add -p ha-316015 --control-plane -v=7 --alsologtostderr: (3m7.278050262s)
ha_test.go:611: (dbg) Run:  out/minikube-linux-amd64 -p ha-316015 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (188.09s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.51s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.51s)

                                                
                                    
TestImageBuild/serial/Setup (49.06s)

                                                
                                                
=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-linux-amd64 start -p image-536016 --driver=kvm2 
E0904 20:00:43.279258   12431 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19575-5257/.minikube/profiles/addons-586464/client.crt: no such file or directory" logger="UnhandledError"
image_test.go:69: (dbg) Done: out/minikube-linux-amd64 start -p image-536016 --driver=kvm2 : (49.06244655s)
--- PASS: TestImageBuild/serial/Setup (49.06s)

                                                
                                    
TestImageBuild/serial/NormalBuild (2.08s)

                                                
                                                
=== RUN   TestImageBuild/serial/NormalBuild
image_test.go:78: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-536016
image_test.go:78: (dbg) Done: out/minikube-linux-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-536016: (2.078937799s)
--- PASS: TestImageBuild/serial/NormalBuild (2.08s)

                                                
                                    
TestImageBuild/serial/BuildWithBuildArg (1.25s)

                                                
                                                
=== RUN   TestImageBuild/serial/BuildWithBuildArg
image_test.go:99: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-536016
image_test.go:99: (dbg) Done: out/minikube-linux-amd64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-536016: (1.252044494s)
--- PASS: TestImageBuild/serial/BuildWithBuildArg (1.25s)

                                                
                                    
TestImageBuild/serial/BuildWithDockerIgnore (1.02s)

                                                
                                                
=== RUN   TestImageBuild/serial/BuildWithDockerIgnore
image_test.go:133: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-536016
image_test.go:133: (dbg) Done: out/minikube-linux-amd64 image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-536016: (1.022223329s)
--- PASS: TestImageBuild/serial/BuildWithDockerIgnore (1.02s)

TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.81s)

=== RUN   TestImageBuild/serial/BuildWithSpecifiedDockerfile
image_test.go:88: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-536016
--- PASS: TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.81s)

TestJSONOutput/start/Command (86.9s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-510485 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2 
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-510485 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2 : (1m26.897005488s)
--- PASS: TestJSONOutput/start/Command (86.90s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.55s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-510485 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.55s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.54s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-510485 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.54s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (7.49s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-510485 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-510485 --output=json --user=testUser: (7.488238462s)
--- PASS: TestJSONOutput/stop/Command (7.49s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.19s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-424630 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-424630 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (62.219242ms)

-- stdout --
	{"specversion":"1.0","id":"991472f5-73c7-457a-a5d6-f530ffbc05e1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-424630] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"0bfb0ce7-5143-4138-ba82-73e7a9f968da","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19575"}}
	{"specversion":"1.0","id":"e6163722-0c61-4e21-9e43-7126b822c644","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"32db4c9a-7ee4-4205-8aa0-00117e076251","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19575-5257/kubeconfig"}}
	{"specversion":"1.0","id":"cff310f6-761c-44c4-a2a1-ab8a04754ac8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19575-5257/.minikube"}}
	{"specversion":"1.0","id":"7793aab5-8343-449a-8929-8baa9de85fc4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"78f7a3c1-020e-4c77-bf3e-fde7277f716c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"1b482274-96f2-4402-b503-b88fa8d2e5e2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-424630" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-424630
--- PASS: TestErrorJSONOutput (0.19s)

TestMainNoArgs (0.04s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.04s)

TestMinikubeProfile (102.02s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-814916 --driver=kvm2 
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-814916 --driver=kvm2 : (49.93153779s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-817568 --driver=kvm2 
E0904 20:03:52.233032   12431 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19575-5257/.minikube/profiles/functional-566210/client.crt: no such file or directory" logger="UnhandledError"
E0904 20:04:20.215151   12431 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19575-5257/.minikube/profiles/addons-586464/client.crt: no such file or directory" logger="UnhandledError"
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-817568 --driver=kvm2 : (49.480741093s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-814916
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-817568
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-817568" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-817568
helpers_test.go:175: Cleaning up "first-814916" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-814916
--- PASS: TestMinikubeProfile (102.02s)

TestMountStart/serial/StartWithMountFirst (27.77s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-691010 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2 
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-691010 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2 : (26.76940014s)
--- PASS: TestMountStart/serial/StartWithMountFirst (27.77s)

TestMountStart/serial/VerifyMountFirst (0.37s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-691010 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-691010 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountFirst (0.37s)

TestMountStart/serial/StartWithMountSecond (27.5s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-705264 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2 
E0904 20:05:15.299449   12431 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19575-5257/.minikube/profiles/functional-566210/client.crt: no such file or directory" logger="UnhandledError"
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-705264 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2 : (26.501173092s)
--- PASS: TestMountStart/serial/StartWithMountSecond (27.50s)

TestMountStart/serial/VerifyMountSecond (0.37s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-705264 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-705264 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountSecond (0.37s)

TestMountStart/serial/DeleteFirst (0.85s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-691010 --alsologtostderr -v=5
--- PASS: TestMountStart/serial/DeleteFirst (0.85s)

TestMountStart/serial/VerifyMountPostDelete (0.37s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-705264 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-705264 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.37s)

TestMountStart/serial/Stop (3.28s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-705264
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-705264: (3.277889226s)
--- PASS: TestMountStart/serial/Stop (3.28s)

TestMountStart/serial/RestartStopped (26.66s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-705264
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-705264: (25.663642715s)
--- PASS: TestMountStart/serial/RestartStopped (26.66s)

TestMountStart/serial/VerifyMountPostStop (0.37s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-705264 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-705264 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.37s)

TestMultiNode/serial/FreshStart2Nodes (126.77s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-710757 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2 
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-710757 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2 : (2m6.377589241s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-710757 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (126.77s)

TestMultiNode/serial/DeployApp2Nodes (4.15s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-710757 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-710757 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-710757 -- rollout status deployment/busybox: (2.625400809s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-710757 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-710757 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-710757 -- exec busybox-7dff88458-8z6sv -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-710757 -- exec busybox-7dff88458-nljtw -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-710757 -- exec busybox-7dff88458-8z6sv -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-710757 -- exec busybox-7dff88458-nljtw -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-710757 -- exec busybox-7dff88458-8z6sv -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-710757 -- exec busybox-7dff88458-nljtw -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (4.15s)

TestMultiNode/serial/PingHostFrom2Pods (0.8s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-710757 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-710757 -- exec busybox-7dff88458-8z6sv -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-710757 -- exec busybox-7dff88458-8z6sv -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-710757 -- exec busybox-7dff88458-nljtw -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-710757 -- exec busybox-7dff88458-nljtw -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.80s)

TestMultiNode/serial/AddNode (56.81s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-710757 -v 3 --alsologtostderr
E0904 20:08:52.232592   12431 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19575-5257/.minikube/profiles/functional-566210/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-710757 -v 3 --alsologtostderr: (56.26539425s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-710757 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (56.81s)

TestMultiNode/serial/MultiNodeLabels (0.06s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-710757 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

TestMultiNode/serial/ProfileList (0.21s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.21s)

TestMultiNode/serial/CopyFile (6.99s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-710757 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-710757 cp testdata/cp-test.txt multinode-710757:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-710757 ssh -n multinode-710757 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-710757 cp multinode-710757:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1388672267/001/cp-test_multinode-710757.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-710757 ssh -n multinode-710757 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-710757 cp multinode-710757:/home/docker/cp-test.txt multinode-710757-m02:/home/docker/cp-test_multinode-710757_multinode-710757-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-710757 ssh -n multinode-710757 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-710757 ssh -n multinode-710757-m02 "sudo cat /home/docker/cp-test_multinode-710757_multinode-710757-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-710757 cp multinode-710757:/home/docker/cp-test.txt multinode-710757-m03:/home/docker/cp-test_multinode-710757_multinode-710757-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-710757 ssh -n multinode-710757 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-710757 ssh -n multinode-710757-m03 "sudo cat /home/docker/cp-test_multinode-710757_multinode-710757-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-710757 cp testdata/cp-test.txt multinode-710757-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-710757 ssh -n multinode-710757-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-710757 cp multinode-710757-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1388672267/001/cp-test_multinode-710757-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-710757 ssh -n multinode-710757-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-710757 cp multinode-710757-m02:/home/docker/cp-test.txt multinode-710757:/home/docker/cp-test_multinode-710757-m02_multinode-710757.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-710757 ssh -n multinode-710757-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-710757 ssh -n multinode-710757 "sudo cat /home/docker/cp-test_multinode-710757-m02_multinode-710757.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-710757 cp multinode-710757-m02:/home/docker/cp-test.txt multinode-710757-m03:/home/docker/cp-test_multinode-710757-m02_multinode-710757-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-710757 ssh -n multinode-710757-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-710757 ssh -n multinode-710757-m03 "sudo cat /home/docker/cp-test_multinode-710757-m02_multinode-710757-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-710757 cp testdata/cp-test.txt multinode-710757-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-710757 ssh -n multinode-710757-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-710757 cp multinode-710757-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1388672267/001/cp-test_multinode-710757-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-710757 ssh -n multinode-710757-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-710757 cp multinode-710757-m03:/home/docker/cp-test.txt multinode-710757:/home/docker/cp-test_multinode-710757-m03_multinode-710757.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-710757 ssh -n multinode-710757-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-710757 ssh -n multinode-710757 "sudo cat /home/docker/cp-test_multinode-710757-m03_multinode-710757.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-710757 cp multinode-710757-m03:/home/docker/cp-test.txt multinode-710757-m02:/home/docker/cp-test_multinode-710757-m03_multinode-710757-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-710757 ssh -n multinode-710757-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-710757 ssh -n multinode-710757-m02 "sudo cat /home/docker/cp-test_multinode-710757-m03_multinode-710757-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (6.99s)
TestMultiNode/serial/StopNode (3.35s)
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-710757 node stop m03
E0904 20:09:20.214826   12431 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19575-5257/.minikube/profiles/addons-586464/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-710757 node stop m03: (2.50989806s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-710757 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-710757 status: exit status 7 (419.559662ms)
-- stdout --
	multinode-710757
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-710757-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-710757-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-710757 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-710757 status --alsologtostderr: exit status 7 (415.879383ms)
-- stdout --
	multinode-710757
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-710757-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-710757-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I0904 20:09:21.640012   38600 out.go:345] Setting OutFile to fd 1 ...
	I0904 20:09:21.640267   38600 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0904 20:09:21.640276   38600 out.go:358] Setting ErrFile to fd 2...
	I0904 20:09:21.640280   38600 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0904 20:09:21.640457   38600 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19575-5257/.minikube/bin
	I0904 20:09:21.640610   38600 out.go:352] Setting JSON to false
	I0904 20:09:21.640634   38600 mustload.go:65] Loading cluster: multinode-710757
	I0904 20:09:21.640686   38600 notify.go:220] Checking for updates...
	I0904 20:09:21.640983   38600 config.go:182] Loaded profile config "multinode-710757": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0904 20:09:21.640996   38600 status.go:255] checking status of multinode-710757 ...
	I0904 20:09:21.641416   38600 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0904 20:09:21.641485   38600 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0904 20:09:21.659897   38600 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35895
	I0904 20:09:21.660383   38600 main.go:141] libmachine: () Calling .GetVersion
	I0904 20:09:21.661005   38600 main.go:141] libmachine: Using API Version  1
	I0904 20:09:21.661033   38600 main.go:141] libmachine: () Calling .SetConfigRaw
	I0904 20:09:21.661552   38600 main.go:141] libmachine: () Calling .GetMachineName
	I0904 20:09:21.661776   38600 main.go:141] libmachine: (multinode-710757) Calling .GetState
	I0904 20:09:21.663444   38600 status.go:330] multinode-710757 host status = "Running" (err=<nil>)
	I0904 20:09:21.663470   38600 host.go:66] Checking if "multinode-710757" exists ...
	I0904 20:09:21.663773   38600 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0904 20:09:21.663808   38600 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0904 20:09:21.678995   38600 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45025
	I0904 20:09:21.679356   38600 main.go:141] libmachine: () Calling .GetVersion
	I0904 20:09:21.679790   38600 main.go:141] libmachine: Using API Version  1
	I0904 20:09:21.679821   38600 main.go:141] libmachine: () Calling .SetConfigRaw
	I0904 20:09:21.680227   38600 main.go:141] libmachine: () Calling .GetMachineName
	I0904 20:09:21.680431   38600 main.go:141] libmachine: (multinode-710757) Calling .GetIP
	I0904 20:09:21.683381   38600 main.go:141] libmachine: (multinode-710757) DBG | domain multinode-710757 has defined MAC address 52:54:00:e4:25:be in network mk-multinode-710757
	I0904 20:09:21.683810   38600 main.go:141] libmachine: (multinode-710757) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:25:be", ip: ""} in network mk-multinode-710757: {Iface:virbr1 ExpiryTime:2024-09-04 21:06:16 +0000 UTC Type:0 Mac:52:54:00:e4:25:be Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:multinode-710757 Clientid:01:52:54:00:e4:25:be}
	I0904 20:09:21.683838   38600 main.go:141] libmachine: (multinode-710757) DBG | domain multinode-710757 has defined IP address 192.168.39.225 and MAC address 52:54:00:e4:25:be in network mk-multinode-710757
	I0904 20:09:21.683938   38600 host.go:66] Checking if "multinode-710757" exists ...
	I0904 20:09:21.684241   38600 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0904 20:09:21.684279   38600 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0904 20:09:21.699873   38600 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42893
	I0904 20:09:21.700517   38600 main.go:141] libmachine: () Calling .GetVersion
	I0904 20:09:21.701095   38600 main.go:141] libmachine: Using API Version  1
	I0904 20:09:21.701124   38600 main.go:141] libmachine: () Calling .SetConfigRaw
	I0904 20:09:21.701451   38600 main.go:141] libmachine: () Calling .GetMachineName
	I0904 20:09:21.701630   38600 main.go:141] libmachine: (multinode-710757) Calling .DriverName
	I0904 20:09:21.701798   38600 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0904 20:09:21.701823   38600 main.go:141] libmachine: (multinode-710757) Calling .GetSSHHostname
	I0904 20:09:21.704804   38600 main.go:141] libmachine: (multinode-710757) DBG | domain multinode-710757 has defined MAC address 52:54:00:e4:25:be in network mk-multinode-710757
	I0904 20:09:21.705158   38600 main.go:141] libmachine: (multinode-710757) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:25:be", ip: ""} in network mk-multinode-710757: {Iface:virbr1 ExpiryTime:2024-09-04 21:06:16 +0000 UTC Type:0 Mac:52:54:00:e4:25:be Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:multinode-710757 Clientid:01:52:54:00:e4:25:be}
	I0904 20:09:21.705193   38600 main.go:141] libmachine: (multinode-710757) DBG | domain multinode-710757 has defined IP address 192.168.39.225 and MAC address 52:54:00:e4:25:be in network mk-multinode-710757
	I0904 20:09:21.705317   38600 main.go:141] libmachine: (multinode-710757) Calling .GetSSHPort
	I0904 20:09:21.705474   38600 main.go:141] libmachine: (multinode-710757) Calling .GetSSHKeyPath
	I0904 20:09:21.705627   38600 main.go:141] libmachine: (multinode-710757) Calling .GetSSHUsername
	I0904 20:09:21.705754   38600 sshutil.go:53] new ssh client: &{IP:192.168.39.225 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19575-5257/.minikube/machines/multinode-710757/id_rsa Username:docker}
	I0904 20:09:21.793049   38600 ssh_runner.go:195] Run: systemctl --version
	I0904 20:09:21.799196   38600 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0904 20:09:21.813554   38600 kubeconfig.go:125] found "multinode-710757" server: "https://192.168.39.225:8443"
	I0904 20:09:21.813589   38600 api_server.go:166] Checking apiserver status ...
	I0904 20:09:21.813627   38600 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0904 20:09:21.827113   38600 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1846/cgroup
	W0904 20:09:21.836332   38600 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1846/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0904 20:09:21.836380   38600 ssh_runner.go:195] Run: ls
	I0904 20:09:21.840617   38600 api_server.go:253] Checking apiserver healthz at https://192.168.39.225:8443/healthz ...
	I0904 20:09:21.844520   38600 api_server.go:279] https://192.168.39.225:8443/healthz returned 200:
	ok
	I0904 20:09:21.844542   38600 status.go:422] multinode-710757 apiserver status = Running (err=<nil>)
	I0904 20:09:21.844553   38600 status.go:257] multinode-710757 status: &{Name:multinode-710757 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0904 20:09:21.844567   38600 status.go:255] checking status of multinode-710757-m02 ...
	I0904 20:09:21.844850   38600 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0904 20:09:21.844882   38600 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0904 20:09:21.860412   38600 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45377
	I0904 20:09:21.860829   38600 main.go:141] libmachine: () Calling .GetVersion
	I0904 20:09:21.861311   38600 main.go:141] libmachine: Using API Version  1
	I0904 20:09:21.861338   38600 main.go:141] libmachine: () Calling .SetConfigRaw
	I0904 20:09:21.861651   38600 main.go:141] libmachine: () Calling .GetMachineName
	I0904 20:09:21.861811   38600 main.go:141] libmachine: (multinode-710757-m02) Calling .GetState
	I0904 20:09:21.863089   38600 status.go:330] multinode-710757-m02 host status = "Running" (err=<nil>)
	I0904 20:09:21.863106   38600 host.go:66] Checking if "multinode-710757-m02" exists ...
	I0904 20:09:21.863403   38600 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0904 20:09:21.863443   38600 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0904 20:09:21.878842   38600 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40381
	I0904 20:09:21.879269   38600 main.go:141] libmachine: () Calling .GetVersion
	I0904 20:09:21.879709   38600 main.go:141] libmachine: Using API Version  1
	I0904 20:09:21.879732   38600 main.go:141] libmachine: () Calling .SetConfigRaw
	I0904 20:09:21.880036   38600 main.go:141] libmachine: () Calling .GetMachineName
	I0904 20:09:21.880234   38600 main.go:141] libmachine: (multinode-710757-m02) Calling .GetIP
	I0904 20:09:21.882881   38600 main.go:141] libmachine: (multinode-710757-m02) DBG | domain multinode-710757-m02 has defined MAC address 52:54:00:72:ce:33 in network mk-multinode-710757
	I0904 20:09:21.883225   38600 main.go:141] libmachine: (multinode-710757-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:ce:33", ip: ""} in network mk-multinode-710757: {Iface:virbr1 ExpiryTime:2024-09-04 21:07:29 +0000 UTC Type:0 Mac:52:54:00:72:ce:33 Iaid: IPaddr:192.168.39.207 Prefix:24 Hostname:multinode-710757-m02 Clientid:01:52:54:00:72:ce:33}
	I0904 20:09:21.883254   38600 main.go:141] libmachine: (multinode-710757-m02) DBG | domain multinode-710757-m02 has defined IP address 192.168.39.207 and MAC address 52:54:00:72:ce:33 in network mk-multinode-710757
	I0904 20:09:21.883370   38600 host.go:66] Checking if "multinode-710757-m02" exists ...
	I0904 20:09:21.883661   38600 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0904 20:09:21.883691   38600 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0904 20:09:21.898884   38600 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43257
	I0904 20:09:21.899329   38600 main.go:141] libmachine: () Calling .GetVersion
	I0904 20:09:21.899803   38600 main.go:141] libmachine: Using API Version  1
	I0904 20:09:21.899820   38600 main.go:141] libmachine: () Calling .SetConfigRaw
	I0904 20:09:21.900140   38600 main.go:141] libmachine: () Calling .GetMachineName
	I0904 20:09:21.900341   38600 main.go:141] libmachine: (multinode-710757-m02) Calling .DriverName
	I0904 20:09:21.900638   38600 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0904 20:09:21.900661   38600 main.go:141] libmachine: (multinode-710757-m02) Calling .GetSSHHostname
	I0904 20:09:21.903440   38600 main.go:141] libmachine: (multinode-710757-m02) DBG | domain multinode-710757-m02 has defined MAC address 52:54:00:72:ce:33 in network mk-multinode-710757
	I0904 20:09:21.903911   38600 main.go:141] libmachine: (multinode-710757-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:ce:33", ip: ""} in network mk-multinode-710757: {Iface:virbr1 ExpiryTime:2024-09-04 21:07:29 +0000 UTC Type:0 Mac:52:54:00:72:ce:33 Iaid: IPaddr:192.168.39.207 Prefix:24 Hostname:multinode-710757-m02 Clientid:01:52:54:00:72:ce:33}
	I0904 20:09:21.903943   38600 main.go:141] libmachine: (multinode-710757-m02) DBG | domain multinode-710757-m02 has defined IP address 192.168.39.207 and MAC address 52:54:00:72:ce:33 in network mk-multinode-710757
	I0904 20:09:21.904109   38600 main.go:141] libmachine: (multinode-710757-m02) Calling .GetSSHPort
	I0904 20:09:21.904293   38600 main.go:141] libmachine: (multinode-710757-m02) Calling .GetSSHKeyPath
	I0904 20:09:21.904439   38600 main.go:141] libmachine: (multinode-710757-m02) Calling .GetSSHUsername
	I0904 20:09:21.904554   38600 sshutil.go:53] new ssh client: &{IP:192.168.39.207 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19575-5257/.minikube/machines/multinode-710757-m02/id_rsa Username:docker}
	I0904 20:09:21.980792   38600 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0904 20:09:21.994293   38600 status.go:257] multinode-710757-m02 status: &{Name:multinode-710757-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0904 20:09:21.994331   38600 status.go:255] checking status of multinode-710757-m03 ...
	I0904 20:09:21.994649   38600 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0904 20:09:21.994700   38600 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0904 20:09:22.010418   38600 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39789
	I0904 20:09:22.010854   38600 main.go:141] libmachine: () Calling .GetVersion
	I0904 20:09:22.011315   38600 main.go:141] libmachine: Using API Version  1
	I0904 20:09:22.011340   38600 main.go:141] libmachine: () Calling .SetConfigRaw
	I0904 20:09:22.011676   38600 main.go:141] libmachine: () Calling .GetMachineName
	I0904 20:09:22.011841   38600 main.go:141] libmachine: (multinode-710757-m03) Calling .GetState
	I0904 20:09:22.013317   38600 status.go:330] multinode-710757-m03 host status = "Stopped" (err=<nil>)
	I0904 20:09:22.013332   38600 status.go:343] host is not running, skipping remaining checks
	I0904 20:09:22.013338   38600 status.go:257] multinode-710757-m03 status: &{Name:multinode-710757-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (3.35s)
TestMultiNode/serial/StartAfterStop (42.04s)
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-710757 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-710757 node start m03 -v=7 --alsologtostderr: (41.416269181s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-710757 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (42.04s)
TestMultiNode/serial/RestartKeepsNodes (191.68s)
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-710757
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-710757
multinode_test.go:321: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-710757: (27.196258847s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-710757 --wait=true -v=8 --alsologtostderr
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-710757 --wait=true -v=8 --alsologtostderr: (2m44.398681477s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-710757
--- PASS: TestMultiNode/serial/RestartKeepsNodes (191.68s)
TestMultiNode/serial/DeleteNode (2.26s)
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-710757 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-710757 node delete m03: (1.735981393s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-710757 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (2.26s)
TestMultiNode/serial/StopMultiNode (25.08s)
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-710757 stop
multinode_test.go:345: (dbg) Done: out/minikube-linux-amd64 -p multinode-710757 stop: (24.914333027s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-710757 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-710757 status: exit status 7 (81.919386ms)
-- stdout --
	multinode-710757
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-710757-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-710757 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-710757 status --alsologtostderr: exit status 7 (83.088756ms)
-- stdout --
	multinode-710757
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-710757-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I0904 20:13:43.035231   40381 out.go:345] Setting OutFile to fd 1 ...
	I0904 20:13:43.035361   40381 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0904 20:13:43.035371   40381 out.go:358] Setting ErrFile to fd 2...
	I0904 20:13:43.035377   40381 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0904 20:13:43.035546   40381 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19575-5257/.minikube/bin
	I0904 20:13:43.035738   40381 out.go:352] Setting JSON to false
	I0904 20:13:43.035767   40381 mustload.go:65] Loading cluster: multinode-710757
	I0904 20:13:43.035884   40381 notify.go:220] Checking for updates...
	I0904 20:13:43.036168   40381 config.go:182] Loaded profile config "multinode-710757": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0904 20:13:43.036183   40381 status.go:255] checking status of multinode-710757 ...
	I0904 20:13:43.036559   40381 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0904 20:13:43.036636   40381 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0904 20:13:43.056069   40381 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38655
	I0904 20:13:43.056512   40381 main.go:141] libmachine: () Calling .GetVersion
	I0904 20:13:43.057068   40381 main.go:141] libmachine: Using API Version  1
	I0904 20:13:43.057099   40381 main.go:141] libmachine: () Calling .SetConfigRaw
	I0904 20:13:43.057432   40381 main.go:141] libmachine: () Calling .GetMachineName
	I0904 20:13:43.057616   40381 main.go:141] libmachine: (multinode-710757) Calling .GetState
	I0904 20:13:43.059141   40381 status.go:330] multinode-710757 host status = "Stopped" (err=<nil>)
	I0904 20:13:43.059155   40381 status.go:343] host is not running, skipping remaining checks
	I0904 20:13:43.059161   40381 status.go:257] multinode-710757 status: &{Name:multinode-710757 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0904 20:13:43.059211   40381 status.go:255] checking status of multinode-710757-m02 ...
	I0904 20:13:43.059489   40381 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0904 20:13:43.059528   40381 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0904 20:13:43.074168   40381 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43731
	I0904 20:13:43.074618   40381 main.go:141] libmachine: () Calling .GetVersion
	I0904 20:13:43.075057   40381 main.go:141] libmachine: Using API Version  1
	I0904 20:13:43.075085   40381 main.go:141] libmachine: () Calling .SetConfigRaw
	I0904 20:13:43.075390   40381 main.go:141] libmachine: () Calling .GetMachineName
	I0904 20:13:43.075567   40381 main.go:141] libmachine: (multinode-710757-m02) Calling .GetState
	I0904 20:13:43.076863   40381 status.go:330] multinode-710757-m02 host status = "Stopped" (err=<nil>)
	I0904 20:13:43.076879   40381 status.go:343] host is not running, skipping remaining checks
	I0904 20:13:43.076886   40381 status.go:257] multinode-710757-m02 status: &{Name:multinode-710757-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (25.08s)
TestMultiNode/serial/RestartMultiNode (117.12s)
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-710757 --wait=true -v=8 --alsologtostderr --driver=kvm2 
E0904 20:13:52.232587   12431 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19575-5257/.minikube/profiles/functional-566210/client.crt: no such file or directory" logger="UnhandledError"
E0904 20:14:20.214567   12431 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19575-5257/.minikube/profiles/addons-586464/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-710757 --wait=true -v=8 --alsologtostderr --driver=kvm2 : (1m56.602125672s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-710757 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (117.12s)
TestMultiNode/serial/ValidateNameConflict (49.5s)
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-710757
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-710757-m02 --driver=kvm2 
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-710757-m02 --driver=kvm2 : exit status 14 (62.505546ms)
-- stdout --
	* [multinode-710757-m02] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19575
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19575-5257/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19575-5257/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	
-- /stdout --
** stderr ** 
	! Profile name 'multinode-710757-m02' is duplicated with machine name 'multinode-710757-m02' in profile 'multinode-710757'
	X Exiting due to MK_USAGE: Profile name should be unique
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-710757-m03 --driver=kvm2 
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-710757-m03 --driver=kvm2 : (48.419248768s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-710757
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-710757: exit status 80 (205.980264ms)
-- stdout --
	* Adding node m03 to cluster multinode-710757 as [worker]
	
	
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-710757-m03 already exists in multinode-710757-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-710757-m03
--- PASS: TestMultiNode/serial/ValidateNameConflict (49.50s)
TestPreload (190.8s)
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-353653 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --kubernetes-version=v1.24.4
E0904 20:17:23.280886   12431 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19575-5257/.minikube/profiles/addons-586464/client.crt: no such file or directory" logger="UnhandledError"
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-353653 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --kubernetes-version=v1.24.4: (2m1.605526985s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-353653 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-amd64 -p test-preload-353653 image pull gcr.io/k8s-minikube/busybox: (1.554862838s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-353653
preload_test.go:58: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-353653: (12.482105126s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-353653 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2 
E0904 20:18:52.232628   12431 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19575-5257/.minikube/profiles/functional-566210/client.crt: no such file or directory" logger="UnhandledError"
E0904 20:19:20.214596   12431 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19575-5257/.minikube/profiles/addons-586464/client.crt: no such file or directory" logger="UnhandledError"
preload_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-353653 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2 : (53.886303174s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-353653 image list
helpers_test.go:175: Cleaning up "test-preload-353653" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-353653
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-353653: (1.070047393s)
--- PASS: TestPreload (190.80s)
TestScheduledStopUnix (122.02s)
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-529006 --memory=2048 --driver=kvm2 
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-529006 --memory=2048 --driver=kvm2 : (50.471243295s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-529006 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-529006 -n scheduled-stop-529006
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-529006 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-529006 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-529006 -n scheduled-stop-529006
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-529006
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-529006 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-529006
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-529006: exit status 7 (64.495554ms)
-- stdout --
	scheduled-stop-529006
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-529006 -n scheduled-stop-529006
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-529006 -n scheduled-stop-529006: exit status 7 (63.598738ms)
-- stdout --
	Stopped
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-529006" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-529006
--- PASS: TestScheduledStopUnix (122.02s)
TestSkaffold (129.43s)
=== RUN   TestSkaffold
skaffold_test.go:59: (dbg) Run:  /tmp/skaffold.exe1037481153 version
skaffold_test.go:63: skaffold version: v2.13.2
skaffold_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p skaffold-117538 --memory=2600 --driver=kvm2 
E0904 20:21:55.300930   12431 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19575-5257/.minikube/profiles/functional-566210/client.crt: no such file or directory" logger="UnhandledError"
skaffold_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p skaffold-117538 --memory=2600 --driver=kvm2 : (50.772224531s)
skaffold_test.go:86: copying out/minikube-linux-amd64 to /home/jenkins/workspace/KVM_Linux_integration/out/minikube
skaffold_test.go:105: (dbg) Run:  /tmp/skaffold.exe1037481153 run --minikube-profile skaffold-117538 --kube-context skaffold-117538 --status-check=true --port-forward=false --interactive=false
skaffold_test.go:105: (dbg) Done: /tmp/skaffold.exe1037481153 run --minikube-profile skaffold-117538 --kube-context skaffold-117538 --status-check=true --port-forward=false --interactive=false: (1m5.691546024s)
skaffold_test.go:111: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-app" in namespace "default" ...
helpers_test.go:344: "leeroy-app-698f8dc56b-w4ckl" [f9eb16d7-36e5-413e-ba32-66dd2ca1a7f6] Running
skaffold_test.go:111: (dbg) TestSkaffold: app=leeroy-app healthy within 6.003717838s
skaffold_test.go:114: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-web" in namespace "default" ...
helpers_test.go:344: "leeroy-web-77df7669f8-xcsgv" [01094cc9-1e28-4793-983e-9160f557b15e] Running
E0904 20:23:52.233159   12431 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19575-5257/.minikube/profiles/functional-566210/client.crt: no such file or directory" logger="UnhandledError"
skaffold_test.go:114: (dbg) TestSkaffold: app=leeroy-web healthy within 5.003975851s
helpers_test.go:175: Cleaning up "skaffold-117538" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p skaffold-117538
--- PASS: TestSkaffold (129.43s)
TestRunningBinaryUpgrade (226.71s)
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.2471738571 start -p running-upgrade-176591 --memory=2200 --vm-driver=kvm2 
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.2471738571 start -p running-upgrade-176591 --memory=2200 --vm-driver=kvm2 : (2m19.178360552s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-176591 --memory=2200 --alsologtostderr -v=1 --driver=kvm2 
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-176591 --memory=2200 --alsologtostderr -v=1 --driver=kvm2 : (1m25.434590464s)
helpers_test.go:175: Cleaning up "running-upgrade-176591" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-176591
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-176591: (1.206688281s)
--- PASS: TestRunningBinaryUpgrade (226.71s)
TestKubernetesUpgrade (272.41s)
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-612488 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2 
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-612488 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2 : (53.482235221s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-612488
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-612488: (1m50.536197952s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-612488 status --format={{.Host}}
E0904 20:28:42.165146   12431 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19575-5257/.minikube/profiles/skaffold-117538/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-612488 status --format={{.Host}}: exit status 7 (76.653947ms)
-- stdout --
	Stopped
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-612488 --memory=2200 --kubernetes-version=v1.31.0 --alsologtostderr -v=1 --driver=kvm2 
E0904 20:28:42.807293   12431 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19575-5257/.minikube/profiles/skaffold-117538/client.crt: no such file or directory" logger="UnhandledError"
E0904 20:28:44.089279   12431 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19575-5257/.minikube/profiles/skaffold-117538/client.crt: no such file or directory" logger="UnhandledError"
E0904 20:28:46.651680   12431 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19575-5257/.minikube/profiles/skaffold-117538/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-612488 --memory=2200 --kubernetes-version=v1.31.0 --alsologtostderr -v=1 --driver=kvm2 : (1m5.137462608s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-612488 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-612488 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2 
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-612488 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2 : exit status 106 (97.122447ms)
-- stdout --
	* [kubernetes-upgrade-612488] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19575
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19575-5257/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19575-5257/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.31.0 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-612488
	    minikube start -p kubernetes-upgrade-612488 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-6124882 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.31.0, by running:
	    
	    minikube start -p kubernetes-upgrade-612488 --kubernetes-version=v1.31.0
	    
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-612488 --memory=2200 --kubernetes-version=v1.31.0 --alsologtostderr -v=1 --driver=kvm2 
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-612488 --memory=2200 --kubernetes-version=v1.31.0 --alsologtostderr -v=1 --driver=kvm2 : (41.82944896s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-612488" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-612488
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-612488: (1.171246374s)
--- PASS: TestKubernetesUpgrade (272.41s)
TestPause/serial/Start (86.94s)
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-284960 --memory=2048 --install-addons=false --wait=all --driver=kvm2 
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-284960 --memory=2048 --install-addons=false --wait=all --driver=kvm2 : (1m26.94371759s)
--- PASS: TestPause/serial/Start (86.94s)
TestNoKubernetes/serial/StartNoK8sWithVersion (0.07s)
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-665811 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2 
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-665811 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2 : exit status 14 (72.916651ms)
-- stdout --
	* [NoKubernetes-665811] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19575
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19575-5257/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19575-5257/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.07s)
TestNoKubernetes/serial/StartWithK8s (105.87s)
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-665811 --driver=kvm2 
E0904 20:24:20.214263   12431 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19575-5257/.minikube/profiles/addons-586464/client.crt: no such file or directory" logger="UnhandledError"
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-665811 --driver=kvm2 : (1m45.611551598s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-665811 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (105.87s)
TestPause/serial/SecondStartNoReconfiguration (64.39s)
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-284960 --alsologtostderr -v=1 --driver=kvm2 
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-284960 --alsologtostderr -v=1 --driver=kvm2 : (1m4.361926405s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (64.39s)
TestNoKubernetes/serial/StartWithStopK8s (20.25s)
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-665811 --no-kubernetes --driver=kvm2 
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-665811 --no-kubernetes --driver=kvm2 : (18.865651082s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-665811 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-665811 status -o json: exit status 2 (252.324761ms)
-- stdout --
	{"Name":"NoKubernetes-665811","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-665811
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-665811: (1.127685685s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (20.25s)
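The exit-status-2 result above is expected: `status -o json` reports a running VM with every Kubernetes component stopped. A minimal sketch of checking that state, fed the exact JSON payload captured in this run (no minikube invocation, just parsing the logged output):

```python
import json

# Verbatim `minikube status -o json` output from the log above.
raw = ('{"Name":"NoKubernetes-665811","Host":"Running","Kubelet":"Stopped",'
       '"APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}')

status = json.loads(raw)

# With --no-kubernetes the VM stays up while kubelet and the
# apiserver are stopped, which is exactly what this subtest asserts.
print(status["Host"])      # Running
print(status["Kubelet"])   # Stopped
```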
TestNoKubernetes/serial/Start (48.73s)
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-665811 --no-kubernetes --driver=kvm2 
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-665811 --no-kubernetes --driver=kvm2 : (48.725045446s)
--- PASS: TestNoKubernetes/serial/Start (48.73s)
TestPause/serial/Pause (0.68s)
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-284960 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.68s)
TestPause/serial/VerifyStatus (0.35s)
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p pause-284960 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-284960 --output=json --layout=cluster: exit status 2 (347.500611ms)
-- stdout --
	{"Name":"pause-284960","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 12 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.34.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-284960","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.35s)
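In the `--layout=cluster` JSON above, minikube reports component state as HTTP-style status codes (200 OK, 405 Stopped, 418 Paused, as inferred from the paired StatusName fields in the output). A sketch of walking that structure, using an abbreviated copy of the payload from this run:

```python
import json

# Abbreviated `minikube status --output=json --layout=cluster` payload
# from the log above; the code/name pairs are copied from that output.
raw = """{"Name":"pause-284960","StatusCode":418,"StatusName":"Paused",
  "Nodes":[{"Name":"pause-284960","StatusCode":200,"StatusName":"OK",
    "Components":{
      "apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},
      "kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}"""

cluster = json.loads(raw)
print(cluster["StatusName"])  # Paused
for node in cluster["Nodes"]:
    for comp in node["Components"].values():
        print(comp["Name"], comp["StatusCode"], comp["StatusName"])
```

A paused profile is healthy from the test's point of view, but the command still exits non-zero (exit status 2 here), so the harness records it as a Non-zero exit rather than a failure.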
TestPause/serial/Unpause (0.71s)
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-amd64 unpause -p pause-284960 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.71s)
TestPause/serial/PauseAgain (0.93s)
=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-284960 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.93s)
TestPause/serial/DeletePaused (1.13s)
=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p pause-284960 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p pause-284960 --alsologtostderr -v=5: (1.126213222s)
--- PASS: TestPause/serial/DeletePaused (1.13s)
TestPause/serial/VerifyDeletedResources (0.46s)
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestPause/serial/VerifyDeletedResources (0.46s)
TestNoKubernetes/serial/VerifyK8sNotRunning (0.24s)
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-665811 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-665811 "sudo systemctl is-active --quiet service kubelet": exit status 1 (240.752787ms)
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.24s)
TestNoKubernetes/serial/ProfileList (22.42s)
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:169: (dbg) Done: out/minikube-linux-amd64 profile list: (18.881891139s)
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
no_kubernetes_test.go:179: (dbg) Done: out/minikube-linux-amd64 profile list --output=json: (3.539276027s)
--- PASS: TestNoKubernetes/serial/ProfileList (22.42s)
TestNoKubernetes/serial/Stop (2.33s)
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-665811
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-665811: (2.325896758s)
--- PASS: TestNoKubernetes/serial/Stop (2.33s)
TestNoKubernetes/serial/StartNoArgs (30.06s)
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-665811 --driver=kvm2 
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-665811 --driver=kvm2 : (30.06018917s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (30.06s)
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.23s)
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-665811 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-665811 "sudo systemctl is-active --quiet service kubelet": exit status 1 (225.262393ms)
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.23s)
TestStoppedBinaryUpgrade/Setup (0.58s)
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.58s)
TestStoppedBinaryUpgrade/Upgrade (187.75s)
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.2508083419 start -p stopped-upgrade-771086 --memory=2200 --vm-driver=kvm2 
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.2508083419 start -p stopped-upgrade-771086 --memory=2200 --vm-driver=kvm2 : (1m39.13530439s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.2508083419 -p stopped-upgrade-771086 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.2508083419 -p stopped-upgrade-771086 stop: (13.167488527s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-771086 --memory=2200 --alsologtostderr -v=1 --driver=kvm2 
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-771086 --memory=2200 --alsologtostderr -v=1 --driver=kvm2 : (1m15.445622987s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (187.75s)
TestStoppedBinaryUpgrade/MinikubeLogs (1.01s)
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-771086
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-amd64 logs -p stopped-upgrade-771086: (1.010977408s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.01s)
TestNetworkPlugins/group/auto/Start (118.71s)
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-149381 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2 
E0904 20:31:25.380864   12431 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19575-5257/.minikube/profiles/skaffold-117538/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-149381 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2 : (1m58.707999335s)
--- PASS: TestNetworkPlugins/group/auto/Start (118.71s)
TestNetworkPlugins/group/kindnet/Start (101.8s)
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-149381 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2 
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-149381 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2 : (1m41.797953566s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (101.80s)
TestNetworkPlugins/group/calico/Start (109.86s)
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-149381 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2 
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-149381 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2 : (1m49.845797736s)
--- PASS: TestNetworkPlugins/group/calico/Start (109.86s)
TestNetworkPlugins/group/auto/KubeletFlags (0.21s)
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-149381 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.21s)
TestNetworkPlugins/group/auto/NetCatPod (11.22s)
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-149381 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-vscft" [c32f8009-a426-40cb-9f82-3d141cca6665] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-vscft" [c32f8009-a426-40cb-9f82-3d141cca6665] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 11.004022914s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (11.22s)
TestNetworkPlugins/group/auto/DNS (0.19s)
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-149381 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.19s)
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-bzxw9" [95a60757-ef2c-484e-ba38-01ba30c8ff08] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.006038666s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)
TestNetworkPlugins/group/auto/Localhost (0.15s)
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-149381 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.15s)

TestNetworkPlugins/group/auto/HairPin (0.16s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-149381 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.16s)

TestNetworkPlugins/group/kindnet/KubeletFlags (0.25s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-149381 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.25s)

TestNetworkPlugins/group/kindnet/NetCatPod (14.32s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-149381 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-wqpcn" [0a1cb76c-01dc-494a-b4ce-81604619198b] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-wqpcn" [0a1cb76c-01dc-494a-b4ce-81604619198b] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 14.005174084s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (14.32s)

TestNetworkPlugins/group/custom-flannel/Start (76.05s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-149381 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2 
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-149381 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2 : (1m16.051044499s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (76.05s)

TestNetworkPlugins/group/kindnet/DNS (0.25s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-149381 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.25s)

TestNetworkPlugins/group/kindnet/Localhost (0.16s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-149381 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.16s)

TestNetworkPlugins/group/kindnet/HairPin (0.18s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-149381 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.18s)

TestNetworkPlugins/group/calico/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-97s77" [b73ddb13-56c5-40fc-ae4b-96eb0f85591b] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.005062216s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

TestNetworkPlugins/group/false/Start (71.15s)

=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p false-149381 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=kvm2 
E0904 20:33:52.232937   12431 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19575-5257/.minikube/profiles/functional-566210/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p false-149381 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=kvm2 : (1m11.145920775s)
--- PASS: TestNetworkPlugins/group/false/Start (71.15s)

TestNetworkPlugins/group/calico/KubeletFlags (0.22s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-149381 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.22s)

TestNetworkPlugins/group/calico/NetCatPod (12.27s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-149381 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-4p2ds" [0da5381a-ffd0-4625-ad4a-b5930348651b] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0904 20:33:54.282676   12431 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19575-5257/.minikube/profiles/gvisor-769621/client.crt: no such file or directory" logger="UnhandledError"
E0904 20:33:54.289107   12431 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19575-5257/.minikube/profiles/gvisor-769621/client.crt: no such file or directory" logger="UnhandledError"
E0904 20:33:54.300596   12431 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19575-5257/.minikube/profiles/gvisor-769621/client.crt: no such file or directory" logger="UnhandledError"
E0904 20:33:54.322117   12431 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19575-5257/.minikube/profiles/gvisor-769621/client.crt: no such file or directory" logger="UnhandledError"
E0904 20:33:54.363613   12431 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19575-5257/.minikube/profiles/gvisor-769621/client.crt: no such file or directory" logger="UnhandledError"
E0904 20:33:54.445112   12431 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19575-5257/.minikube/profiles/gvisor-769621/client.crt: no such file or directory" logger="UnhandledError"
E0904 20:33:54.606613   12431 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19575-5257/.minikube/profiles/gvisor-769621/client.crt: no such file or directory" logger="UnhandledError"
E0904 20:33:54.928481   12431 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19575-5257/.minikube/profiles/gvisor-769621/client.crt: no such file or directory" logger="UnhandledError"
E0904 20:33:55.570586   12431 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19575-5257/.minikube/profiles/gvisor-769621/client.crt: no such file or directory" logger="UnhandledError"
E0904 20:33:56.852282   12431 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19575-5257/.minikube/profiles/gvisor-769621/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "netcat-6fc964789b-4p2ds" [0da5381a-ffd0-4625-ad4a-b5930348651b] Running
E0904 20:34:03.282228   12431 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19575-5257/.minikube/profiles/addons-586464/client.crt: no such file or directory" logger="UnhandledError"
E0904 20:34:04.535725   12431 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19575-5257/.minikube/profiles/gvisor-769621/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 12.003970916s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (12.27s)

TestNetworkPlugins/group/enable-default-cni/Start (93.17s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-149381 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2 
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-149381 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2 : (1m33.173639958s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (93.17s)

TestNetworkPlugins/group/calico/DNS (0.19s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-149381 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.19s)

TestNetworkPlugins/group/calico/Localhost (0.16s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-149381 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.16s)

TestNetworkPlugins/group/calico/HairPin (0.16s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-149381 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.16s)

TestNetworkPlugins/group/flannel/Start (106.87s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-149381 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2 
E0904 20:34:35.258817   12431 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19575-5257/.minikube/profiles/gvisor-769621/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-149381 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2 : (1m46.873331288s)
--- PASS: TestNetworkPlugins/group/flannel/Start (106.87s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.20s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-149381 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.20s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (11.24s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-149381 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-x6p7v" [c7b6a621-c986-491b-9866-6ec9c7640faf] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-x6p7v" [c7b6a621-c986-491b-9866-6ec9c7640faf] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 11.032835349s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (11.24s)

TestNetworkPlugins/group/custom-flannel/DNS (0.33s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-149381 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.33s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.16s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-149381 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.16s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.17s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-149381 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.17s)

TestNetworkPlugins/group/false/KubeletFlags (0.24s)

=== RUN   TestNetworkPlugins/group/false/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p false-149381 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/false/KubeletFlags (0.24s)

TestNetworkPlugins/group/false/NetCatPod (13.26s)

=== RUN   TestNetworkPlugins/group/false/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context false-149381 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/false/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-2wlgk" [320f7ea2-f479-4399-8fbc-f47dab7d4894] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-2wlgk" [320f7ea2-f479-4399-8fbc-f47dab7d4894] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/false/NetCatPod: app=netcat healthy within 13.007583324s
--- PASS: TestNetworkPlugins/group/false/NetCatPod (13.26s)

TestNetworkPlugins/group/false/DNS (0.16s)

=== RUN   TestNetworkPlugins/group/false/DNS
net_test.go:175: (dbg) Run:  kubectl --context false-149381 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/false/DNS (0.16s)

TestNetworkPlugins/group/false/Localhost (0.14s)

=== RUN   TestNetworkPlugins/group/false/Localhost
net_test.go:194: (dbg) Run:  kubectl --context false-149381 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/false/Localhost (0.14s)

TestNetworkPlugins/group/false/HairPin (0.17s)

=== RUN   TestNetworkPlugins/group/false/HairPin
net_test.go:264: (dbg) Run:  kubectl --context false-149381 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/false/HairPin (0.17s)

TestNetworkPlugins/group/bridge/Start (107.95s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-149381 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2 
E0904 20:35:16.220910   12431 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19575-5257/.minikube/profiles/gvisor-769621/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-149381 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2 : (1m47.952760838s)
--- PASS: TestNetworkPlugins/group/bridge/Start (107.95s)

TestNetworkPlugins/group/kubenet/Start (86.88s)

=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kubenet-149381 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=kvm2 
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kubenet-149381 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=kvm2 : (1m26.877313136s)
--- PASS: TestNetworkPlugins/group/kubenet/Start (86.88s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.21s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-149381 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.21s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.27s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-149381 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-pnm75" [88ad0231-b1d1-48ea-8209-7bbd00141d36] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-pnm75" [88ad0231-b1d1-48ea-8209-7bbd00141d36] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 10.003427564s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.27s)

TestNetworkPlugins/group/enable-default-cni/DNS (0.15s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-149381 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.15s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.13s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-149381 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.13s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.12s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-149381 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.12s)

TestStartStop/group/old-k8s-version/serial/FirstStart (198.59s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-870162 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --kubernetes-version=v1.20.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-870162 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --kubernetes-version=v1.20.0: (3m18.588348598s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (198.59s)

TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-m9xtt" [3d60dd92-aac3-4e25-ad3a-7ac5e600ada1] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.004718991s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

TestNetworkPlugins/group/flannel/KubeletFlags (0.22s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-149381 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.22s)

TestNetworkPlugins/group/flannel/NetCatPod (10.22s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-149381 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-8c5gt" [b32883de-1c78-4afe-a5d7-6df5c8d0f06a] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-8c5gt" [b32883de-1c78-4afe-a5d7-6df5c8d0f06a] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 10.004706623s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (10.22s)

TestNetworkPlugins/group/flannel/DNS (0.22s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-149381 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.22s)

TestNetworkPlugins/group/flannel/Localhost (0.21s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-149381 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.21s)

TestNetworkPlugins/group/flannel/HairPin (0.19s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-149381 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.19s)

TestStartStop/group/no-preload/serial/FirstStart (116.03s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-220207 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --kubernetes-version=v1.31.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-220207 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --kubernetes-version=v1.31.0: (1m56.029354878s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (116.03s)

TestNetworkPlugins/group/kubenet/KubeletFlags (0.23s)

=== RUN   TestNetworkPlugins/group/kubenet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kubenet-149381 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kubenet/KubeletFlags (0.23s)

TestNetworkPlugins/group/kubenet/NetCatPod (13.31s)

=== RUN   TestNetworkPlugins/group/kubenet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kubenet-149381 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-vnhrj" [0ba0a56e-0d5f-4a54-9d73-49810de7e8ed] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-vnhrj" [0ba0a56e-0d5f-4a54-9d73-49810de7e8ed] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: app=netcat healthy within 13.0036923s
--- PASS: TestNetworkPlugins/group/kubenet/NetCatPod (13.31s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.48s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-149381 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.48s)

TestNetworkPlugins/group/bridge/NetCatPod (12.68s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-149381 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-zpr8z" [b8c1def1-1aae-4b0c-ac82-4a2a5a836ddc] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-zpr8z" [b8c1def1-1aae-4b0c-ac82-4a2a5a836ddc] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 12.004260367s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (12.68s)

                                                
                                    
TestNetworkPlugins/group/kubenet/DNS (16.81s)
=== RUN   TestNetworkPlugins/group/kubenet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kubenet-149381 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:175: (dbg) Non-zero exit: kubectl --context kubenet-149381 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.155390202s)
-- stdout --
	;; connection timed out; no servers could be reached
	
	
-- /stdout --
** stderr ** 
	command terminated with exit code 1
** /stderr **
net_test.go:175: (dbg) Run:  kubectl --context kubenet-149381 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kubenet/DNS (16.81s)

                                                
                                    
TestNetworkPlugins/group/bridge/DNS (0.16s)
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-149381 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.16s)

                                                
                                    
TestNetworkPlugins/group/bridge/Localhost (0.14s)
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-149381 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.14s)

                                                
                                    
TestNetworkPlugins/group/bridge/HairPin (0.13s)
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-149381 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.13s)

                                                
                                    
TestNetworkPlugins/group/kubenet/Localhost (0.16s)
=== RUN   TestNetworkPlugins/group/kubenet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kubenet-149381 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kubenet/Localhost (0.16s)

                                                
                                    
TestNetworkPlugins/group/kubenet/HairPin (0.19s)
=== RUN   TestNetworkPlugins/group/kubenet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kubenet-149381 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kubenet/HairPin (0.19s)
E0904 20:45:00.280688   12431 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19575-5257/.minikube/profiles/false-149381/client.crt: no such file or directory" logger="UnhandledError"
E0904 20:45:04.585108   12431 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19575-5257/.minikube/profiles/skaffold-117538/client.crt: no such file or directory" logger="UnhandledError"
E0904 20:45:12.361581   12431 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19575-5257/.minikube/profiles/custom-flannel-149381/client.crt: no such file or directory" logger="UnhandledError"

                                                
                                    
TestStartStop/group/embed-certs/serial/FirstStart (204.77s)
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-380781 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --kubernetes-version=v1.31.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-380781 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --kubernetes-version=v1.31.0: (3m24.770500712s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (204.77s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (72.85s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-533399 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --kubernetes-version=v1.31.0
E0904 20:37:59.730363   12431 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19575-5257/.minikube/profiles/auto-149381/client.crt: no such file or directory" logger="UnhandledError"
E0904 20:37:59.736864   12431 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19575-5257/.minikube/profiles/auto-149381/client.crt: no such file or directory" logger="UnhandledError"
E0904 20:37:59.748352   12431 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19575-5257/.minikube/profiles/auto-149381/client.crt: no such file or directory" logger="UnhandledError"
E0904 20:37:59.769846   12431 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19575-5257/.minikube/profiles/auto-149381/client.crt: no such file or directory" logger="UnhandledError"
E0904 20:37:59.811291   12431 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19575-5257/.minikube/profiles/auto-149381/client.crt: no such file or directory" logger="UnhandledError"
E0904 20:37:59.892745   12431 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19575-5257/.minikube/profiles/auto-149381/client.crt: no such file or directory" logger="UnhandledError"
E0904 20:38:00.054475   12431 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19575-5257/.minikube/profiles/auto-149381/client.crt: no such file or directory" logger="UnhandledError"
E0904 20:38:00.376510   12431 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19575-5257/.minikube/profiles/auto-149381/client.crt: no such file or directory" logger="UnhandledError"
E0904 20:38:01.018629   12431 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19575-5257/.minikube/profiles/auto-149381/client.crt: no such file or directory" logger="UnhandledError"
E0904 20:38:02.299966   12431 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19575-5257/.minikube/profiles/auto-149381/client.crt: no such file or directory" logger="UnhandledError"
E0904 20:38:04.862156   12431 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19575-5257/.minikube/profiles/auto-149381/client.crt: no such file or directory" logger="UnhandledError"
E0904 20:38:09.983974   12431 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19575-5257/.minikube/profiles/auto-149381/client.crt: no such file or directory" logger="UnhandledError"
E0904 20:38:10.869823   12431 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19575-5257/.minikube/profiles/kindnet-149381/client.crt: no such file or directory" logger="UnhandledError"
E0904 20:38:10.876223   12431 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19575-5257/.minikube/profiles/kindnet-149381/client.crt: no such file or directory" logger="UnhandledError"
E0904 20:38:10.887644   12431 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19575-5257/.minikube/profiles/kindnet-149381/client.crt: no such file or directory" logger="UnhandledError"
E0904 20:38:10.909140   12431 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19575-5257/.minikube/profiles/kindnet-149381/client.crt: no such file or directory" logger="UnhandledError"
E0904 20:38:10.950561   12431 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19575-5257/.minikube/profiles/kindnet-149381/client.crt: no such file or directory" logger="UnhandledError"
E0904 20:38:11.032044   12431 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19575-5257/.minikube/profiles/kindnet-149381/client.crt: no such file or directory" logger="UnhandledError"
E0904 20:38:11.193602   12431 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19575-5257/.minikube/profiles/kindnet-149381/client.crt: no such file or directory" logger="UnhandledError"
E0904 20:38:11.515180   12431 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19575-5257/.minikube/profiles/kindnet-149381/client.crt: no such file or directory" logger="UnhandledError"
E0904 20:38:12.156720   12431 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19575-5257/.minikube/profiles/kindnet-149381/client.crt: no such file or directory" logger="UnhandledError"
E0904 20:38:13.438762   12431 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19575-5257/.minikube/profiles/kindnet-149381/client.crt: no such file or directory" logger="UnhandledError"
E0904 20:38:16.000899   12431 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19575-5257/.minikube/profiles/kindnet-149381/client.crt: no such file or directory" logger="UnhandledError"
E0904 20:38:20.225653   12431 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19575-5257/.minikube/profiles/auto-149381/client.crt: no such file or directory" logger="UnhandledError"
E0904 20:38:21.123236   12431 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19575-5257/.minikube/profiles/kindnet-149381/client.crt: no such file or directory" logger="UnhandledError"
E0904 20:38:31.364832   12431 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19575-5257/.minikube/profiles/kindnet-149381/client.crt: no such file or directory" logger="UnhandledError"
E0904 20:38:35.302831   12431 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19575-5257/.minikube/profiles/functional-566210/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-533399 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --kubernetes-version=v1.31.0: (1m12.844941622s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (72.85s)

                                                
                                    
TestStartStop/group/no-preload/serial/DeployApp (10.33s)
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-220207 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [742f9bf0-cc37-42b5-ba87-196b88bff2c7] Pending
E0904 20:38:40.707875   12431 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19575-5257/.minikube/profiles/auto-149381/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "busybox" [742f9bf0-cc37-42b5-ba87-196b88bff2c7] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E0904 20:38:41.516332   12431 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19575-5257/.minikube/profiles/skaffold-117538/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "busybox" [742f9bf0-cc37-42b5-ba87-196b88bff2c7] Running
E0904 20:38:47.772606   12431 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19575-5257/.minikube/profiles/calico-149381/client.crt: no such file or directory" logger="UnhandledError"
E0904 20:38:47.779104   12431 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19575-5257/.minikube/profiles/calico-149381/client.crt: no such file or directory" logger="UnhandledError"
E0904 20:38:47.790562   12431 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19575-5257/.minikube/profiles/calico-149381/client.crt: no such file or directory" logger="UnhandledError"
E0904 20:38:47.812165   12431 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19575-5257/.minikube/profiles/calico-149381/client.crt: no such file or directory" logger="UnhandledError"
E0904 20:38:47.853935   12431 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19575-5257/.minikube/profiles/calico-149381/client.crt: no such file or directory" logger="UnhandledError"
E0904 20:38:47.935840   12431 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19575-5257/.minikube/profiles/calico-149381/client.crt: no such file or directory" logger="UnhandledError"
E0904 20:38:48.097658   12431 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19575-5257/.minikube/profiles/calico-149381/client.crt: no such file or directory" logger="UnhandledError"
E0904 20:38:48.419489   12431 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19575-5257/.minikube/profiles/calico-149381/client.crt: no such file or directory" logger="UnhandledError"
E0904 20:38:49.061461   12431 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19575-5257/.minikube/profiles/calico-149381/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 10.003910191s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-220207 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (10.33s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.03s)
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-220207 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E0904 20:38:50.343764   12431 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19575-5257/.minikube/profiles/calico-149381/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-220207 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.03s)

                                                
                                    
TestStartStop/group/no-preload/serial/Stop (12.7s)
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-220207 --alsologtostderr -v=3
E0904 20:38:51.846479   12431 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19575-5257/.minikube/profiles/kindnet-149381/client.crt: no such file or directory" logger="UnhandledError"
E0904 20:38:52.232337   12431 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19575-5257/.minikube/profiles/functional-566210/client.crt: no such file or directory" logger="UnhandledError"
E0904 20:38:52.905487   12431 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19575-5257/.minikube/profiles/calico-149381/client.crt: no such file or directory" logger="UnhandledError"
E0904 20:38:54.283373   12431 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19575-5257/.minikube/profiles/gvisor-769621/client.crt: no such file or directory" logger="UnhandledError"
E0904 20:38:58.027638   12431 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19575-5257/.minikube/profiles/calico-149381/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-220207 --alsologtostderr -v=3: (12.695931679s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (12.70s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.3s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-533399 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [af40e631-7013-4286-85a2-4b45a7369710] Pending
helpers_test.go:344: "busybox" [af40e631-7013-4286-85a2-4b45a7369710] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [af40e631-7013-4286-85a2-4b45a7369710] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 8.005922977s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-533399 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.30s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.19s)
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-220207 -n no-preload-220207
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-220207 -n no-preload-220207: exit status 7 (63.966538ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-220207 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.19s)

                                                
                                    
TestStartStop/group/no-preload/serial/SecondStart (300.55s)
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-220207 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --kubernetes-version=v1.31.0
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-220207 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --kubernetes-version=v1.31.0: (5m0.291031694s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-220207 -n no-preload-220207
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (300.55s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.96s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-533399 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-533399 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.96s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Stop (12.59s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-533399 --alsologtostderr -v=3
E0904 20:39:08.269906   12431 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19575-5257/.minikube/profiles/calico-149381/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-533399 --alsologtostderr -v=3: (12.593493543s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (12.59s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/DeployApp (9.49s)
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-870162 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [2b986d8e-b1ca-4432-8e69-87c8439df751] Pending
E0904 20:39:20.214106   12431 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19575-5257/.minikube/profiles/addons-586464/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "busybox" [2b986d8e-b1ca-4432-8e69-87c8439df751] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E0904 20:39:21.669264   12431 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19575-5257/.minikube/profiles/auto-149381/client.crt: no such file or directory" logger="UnhandledError"
E0904 20:39:21.985481   12431 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19575-5257/.minikube/profiles/gvisor-769621/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "busybox" [2b986d8e-b1ca-4432-8e69-87c8439df751] Running
E0904 20:39:28.751755   12431 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19575-5257/.minikube/profiles/calico-149381/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 9.003783309s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-870162 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (9.49s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.18s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-533399 -n default-k8s-diff-port-533399
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-533399 -n default-k8s-diff-port-533399: exit status 7 (60.875891ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-533399 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.18s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (318s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-533399 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --kubernetes-version=v1.31.0
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-533399 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --kubernetes-version=v1.31.0: (5m17.718700499s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-533399 -n default-k8s-diff-port-533399
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (318.00s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.88s)
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-870162 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-870162 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.88s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Stop (13.35s)
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-870162 --alsologtostderr -v=3
E0904 20:39:32.807942   12431 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19575-5257/.minikube/profiles/kindnet-149381/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-870162 --alsologtostderr -v=3: (13.346654959s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (13.35s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.19s)
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-870162 -n old-k8s-version-870162
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-870162 -n old-k8s-version-870162: exit status 7 (64.590531ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-870162 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.19s)
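In the check above, the harness runs `minikube status` against a stopped profile, gets exit status 7, logs it as "(may be ok)", and proceeds to enable the dashboard addon. A minimal sketch of that tolerate-an-expected-exit-code pattern, with a stub standing in for the real `minikube status` call (`fake_status` and its messages are illustrative, not minikube's):

```shell
# Sketch of tolerating an expected non-zero exit code, as the harness does
# when `minikube status` reports a stopped cluster. `fake_status` is a stub
# standing in for the real command; it simply returns 7.
fake_status() { return 7; }

rc=0
fake_status || rc=$?

case "$rc" in
  0) echo "host is Running" ;;
  7) echo "status error: exit status $rc (may be ok)" ;;   # stopped: proceed
  *) echo "unexpected status: $rc" >&2; exit 1 ;;
esac
```

Only the expected code is waved through; any other non-zero status would fail the run.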

                                                
                                    
TestStartStop/group/old-k8s-version/serial/SecondStart (407.65s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-870162 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --kubernetes-version=v1.20.0
E0904 20:39:44.657087   12431 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19575-5257/.minikube/profiles/custom-flannel-149381/client.crt: no such file or directory" logger="UnhandledError"
E0904 20:39:44.663494   12431 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19575-5257/.minikube/profiles/custom-flannel-149381/client.crt: no such file or directory" logger="UnhandledError"
E0904 20:39:44.674833   12431 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19575-5257/.minikube/profiles/custom-flannel-149381/client.crt: no such file or directory" logger="UnhandledError"
E0904 20:39:44.696200   12431 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19575-5257/.minikube/profiles/custom-flannel-149381/client.crt: no such file or directory" logger="UnhandledError"
E0904 20:39:44.737581   12431 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19575-5257/.minikube/profiles/custom-flannel-149381/client.crt: no such file or directory" logger="UnhandledError"
E0904 20:39:44.819033   12431 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19575-5257/.minikube/profiles/custom-flannel-149381/client.crt: no such file or directory" logger="UnhandledError"
E0904 20:39:44.981077   12431 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19575-5257/.minikube/profiles/custom-flannel-149381/client.crt: no such file or directory" logger="UnhandledError"
E0904 20:39:45.302362   12431 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19575-5257/.minikube/profiles/custom-flannel-149381/client.crt: no such file or directory" logger="UnhandledError"
E0904 20:39:45.944878   12431 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19575-5257/.minikube/profiles/custom-flannel-149381/client.crt: no such file or directory" logger="UnhandledError"
E0904 20:39:47.226669   12431 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19575-5257/.minikube/profiles/custom-flannel-149381/client.crt: no such file or directory" logger="UnhandledError"
E0904 20:39:49.788932   12431 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19575-5257/.minikube/profiles/custom-flannel-149381/client.crt: no such file or directory" logger="UnhandledError"
E0904 20:39:54.911306   12431 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19575-5257/.minikube/profiles/custom-flannel-149381/client.crt: no such file or directory" logger="UnhandledError"
E0904 20:40:00.280886   12431 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19575-5257/.minikube/profiles/false-149381/client.crt: no such file or directory" logger="UnhandledError"
E0904 20:40:00.287486   12431 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19575-5257/.minikube/profiles/false-149381/client.crt: no such file or directory" logger="UnhandledError"
E0904 20:40:00.298898   12431 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19575-5257/.minikube/profiles/false-149381/client.crt: no such file or directory" logger="UnhandledError"
E0904 20:40:00.320302   12431 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19575-5257/.minikube/profiles/false-149381/client.crt: no such file or directory" logger="UnhandledError"
E0904 20:40:00.361745   12431 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19575-5257/.minikube/profiles/false-149381/client.crt: no such file or directory" logger="UnhandledError"
E0904 20:40:00.443475   12431 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19575-5257/.minikube/profiles/false-149381/client.crt: no such file or directory" logger="UnhandledError"
E0904 20:40:00.604718   12431 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19575-5257/.minikube/profiles/false-149381/client.crt: no such file or directory" logger="UnhandledError"
E0904 20:40:00.926569   12431 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19575-5257/.minikube/profiles/false-149381/client.crt: no such file or directory" logger="UnhandledError"
E0904 20:40:01.568205   12431 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19575-5257/.minikube/profiles/false-149381/client.crt: no such file or directory" logger="UnhandledError"
E0904 20:40:02.850492   12431 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19575-5257/.minikube/profiles/false-149381/client.crt: no such file or directory" logger="UnhandledError"
E0904 20:40:05.153300   12431 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19575-5257/.minikube/profiles/custom-flannel-149381/client.crt: no such file or directory" logger="UnhandledError"
E0904 20:40:05.412658   12431 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19575-5257/.minikube/profiles/false-149381/client.crt: no such file or directory" logger="UnhandledError"
E0904 20:40:09.713650   12431 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19575-5257/.minikube/profiles/calico-149381/client.crt: no such file or directory" logger="UnhandledError"
E0904 20:40:10.534235   12431 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19575-5257/.minikube/profiles/false-149381/client.crt: no such file or directory" logger="UnhandledError"
E0904 20:40:20.776648   12431 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19575-5257/.minikube/profiles/false-149381/client.crt: no such file or directory" logger="UnhandledError"
E0904 20:40:25.635492   12431 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19575-5257/.minikube/profiles/custom-flannel-149381/client.crt: no such file or directory" logger="UnhandledError"
E0904 20:40:33.765456   12431 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19575-5257/.minikube/profiles/enable-default-cni-149381/client.crt: no such file or directory" logger="UnhandledError"
E0904 20:40:33.771852   12431 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19575-5257/.minikube/profiles/enable-default-cni-149381/client.crt: no such file or directory" logger="UnhandledError"
E0904 20:40:33.783239   12431 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19575-5257/.minikube/profiles/enable-default-cni-149381/client.crt: no such file or directory" logger="UnhandledError"
E0904 20:40:33.804703   12431 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19575-5257/.minikube/profiles/enable-default-cni-149381/client.crt: no such file or directory" logger="UnhandledError"
E0904 20:40:33.846137   12431 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19575-5257/.minikube/profiles/enable-default-cni-149381/client.crt: no such file or directory" logger="UnhandledError"
E0904 20:40:33.927361   12431 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19575-5257/.minikube/profiles/enable-default-cni-149381/client.crt: no such file or directory" logger="UnhandledError"
E0904 20:40:34.088975   12431 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19575-5257/.minikube/profiles/enable-default-cni-149381/client.crt: no such file or directory" logger="UnhandledError"
E0904 20:40:34.410836   12431 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19575-5257/.minikube/profiles/enable-default-cni-149381/client.crt: no such file or directory" logger="UnhandledError"
E0904 20:40:35.052235   12431 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19575-5257/.minikube/profiles/enable-default-cni-149381/client.crt: no such file or directory" logger="UnhandledError"
E0904 20:40:36.334269   12431 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19575-5257/.minikube/profiles/enable-default-cni-149381/client.crt: no such file or directory" logger="UnhandledError"
E0904 20:40:38.895958   12431 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19575-5257/.minikube/profiles/enable-default-cni-149381/client.crt: no such file or directory" logger="UnhandledError"
E0904 20:40:41.258538   12431 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19575-5257/.minikube/profiles/false-149381/client.crt: no such file or directory" logger="UnhandledError"
E0904 20:40:43.591354   12431 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19575-5257/.minikube/profiles/auto-149381/client.crt: no such file or directory" logger="UnhandledError"
E0904 20:40:44.017971   12431 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19575-5257/.minikube/profiles/enable-default-cni-149381/client.crt: no such file or directory" logger="UnhandledError"
E0904 20:40:54.259666   12431 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19575-5257/.minikube/profiles/enable-default-cni-149381/client.crt: no such file or directory" logger="UnhandledError"
E0904 20:40:54.729248   12431 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19575-5257/.minikube/profiles/kindnet-149381/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-870162 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --kubernetes-version=v1.20.0: (6m47.390075681s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-870162 -n old-k8s-version-870162
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (407.65s)

                                                
                                    
TestStartStop/group/embed-certs/serial/DeployApp (8.28s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-380781 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [2ccde5d2-ce1f-440a-a56c-5e230252b110] Pending
helpers_test.go:344: "busybox" [2ccde5d2-ce1f-440a-a56c-5e230252b110] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [2ccde5d2-ce1f-440a-a56c-5e230252b110] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 8.004141792s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-380781 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (8.28s)
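The "waiting 8m0s for pods matching ..." lines above come from a poll-until-healthy loop: re-check a label selector until the pod reports Running or the deadline passes. A minimal shell sketch of that pattern, with a stub in place of the real kubectl query (`poll_until` and `check` are illustrative names, not harness functions):

```shell
# Illustrative poll-with-timeout loop in the spirit of the harness's
# "waiting ... for pods matching" checks. The real check would query the
# label selector via kubectl; here a stub starts succeeding on its
# second attempt.
poll_until() {
  local timeout=$1 interval=$2; shift 2
  local elapsed=0
  while [ "$elapsed" -lt "$timeout" ]; do
    if "$@"; then
      return 0          # condition met within the deadline
    fi
    sleep "$interval"
    elapsed=$((elapsed + interval))
  done
  return 1              # deadline exceeded
}

attempts=0
check() { attempts=$((attempts + 1)); [ "$attempts" -ge 2 ]; }

poll_until 10 1 check && echo "healthy after ${attempts} attempts"
```

The harness additionally records the elapsed time (e.g. "healthy within 8.004141792s") when the condition is met.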

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.93s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-380781 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-380781 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.93s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Stop (13.33s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-380781 --alsologtostderr -v=3
E0904 20:41:06.597515   12431 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19575-5257/.minikube/profiles/custom-flannel-149381/client.crt: no such file or directory" logger="UnhandledError"
E0904 20:41:10.062032   12431 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19575-5257/.minikube/profiles/flannel-149381/client.crt: no such file or directory" logger="UnhandledError"
E0904 20:41:10.068442   12431 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19575-5257/.minikube/profiles/flannel-149381/client.crt: no such file or directory" logger="UnhandledError"
E0904 20:41:10.079830   12431 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19575-5257/.minikube/profiles/flannel-149381/client.crt: no such file or directory" logger="UnhandledError"
E0904 20:41:10.101236   12431 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19575-5257/.minikube/profiles/flannel-149381/client.crt: no such file or directory" logger="UnhandledError"
E0904 20:41:10.142668   12431 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19575-5257/.minikube/profiles/flannel-149381/client.crt: no such file or directory" logger="UnhandledError"
E0904 20:41:10.224284   12431 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19575-5257/.minikube/profiles/flannel-149381/client.crt: no such file or directory" logger="UnhandledError"
E0904 20:41:10.385564   12431 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19575-5257/.minikube/profiles/flannel-149381/client.crt: no such file or directory" logger="UnhandledError"
E0904 20:41:10.707564   12431 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19575-5257/.minikube/profiles/flannel-149381/client.crt: no such file or directory" logger="UnhandledError"
E0904 20:41:11.349169   12431 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19575-5257/.minikube/profiles/flannel-149381/client.crt: no such file or directory" logger="UnhandledError"
E0904 20:41:12.631410   12431 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19575-5257/.minikube/profiles/flannel-149381/client.crt: no such file or directory" logger="UnhandledError"
E0904 20:41:14.741506   12431 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19575-5257/.minikube/profiles/enable-default-cni-149381/client.crt: no such file or directory" logger="UnhandledError"
E0904 20:41:15.193226   12431 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19575-5257/.minikube/profiles/flannel-149381/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-380781 --alsologtostderr -v=3: (13.327398136s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (13.33s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.18s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-380781 -n embed-certs-380781
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-380781 -n embed-certs-380781: exit status 7 (64.237011ms)

                                                
                                                
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-380781 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.18s)

                                                
                                    
TestStartStop/group/embed-certs/serial/SecondStart (299.61s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-380781 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --kubernetes-version=v1.31.0
E0904 20:41:20.315175   12431 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19575-5257/.minikube/profiles/flannel-149381/client.crt: no such file or directory" logger="UnhandledError"
E0904 20:41:22.220914   12431 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19575-5257/.minikube/profiles/false-149381/client.crt: no such file or directory" logger="UnhandledError"
E0904 20:41:30.556747   12431 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19575-5257/.minikube/profiles/flannel-149381/client.crt: no such file or directory" logger="UnhandledError"
E0904 20:41:31.635613   12431 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19575-5257/.minikube/profiles/calico-149381/client.crt: no such file or directory" logger="UnhandledError"
E0904 20:41:51.039064   12431 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19575-5257/.minikube/profiles/flannel-149381/client.crt: no such file or directory" logger="UnhandledError"
E0904 20:41:55.703791   12431 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19575-5257/.minikube/profiles/enable-default-cni-149381/client.crt: no such file or directory" logger="UnhandledError"
E0904 20:41:57.803689   12431 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19575-5257/.minikube/profiles/kubenet-149381/client.crt: no such file or directory" logger="UnhandledError"
E0904 20:41:57.810086   12431 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19575-5257/.minikube/profiles/kubenet-149381/client.crt: no such file or directory" logger="UnhandledError"
E0904 20:41:57.821525   12431 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19575-5257/.minikube/profiles/kubenet-149381/client.crt: no such file or directory" logger="UnhandledError"
E0904 20:41:57.842937   12431 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19575-5257/.minikube/profiles/kubenet-149381/client.crt: no such file or directory" logger="UnhandledError"
E0904 20:41:57.884304   12431 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19575-5257/.minikube/profiles/kubenet-149381/client.crt: no such file or directory" logger="UnhandledError"
E0904 20:41:57.965742   12431 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19575-5257/.minikube/profiles/kubenet-149381/client.crt: no such file or directory" logger="UnhandledError"
E0904 20:41:58.127285   12431 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19575-5257/.minikube/profiles/kubenet-149381/client.crt: no such file or directory" logger="UnhandledError"
E0904 20:41:58.449243   12431 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19575-5257/.minikube/profiles/kubenet-149381/client.crt: no such file or directory" logger="UnhandledError"
E0904 20:41:59.091452   12431 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19575-5257/.minikube/profiles/kubenet-149381/client.crt: no such file or directory" logger="UnhandledError"
E0904 20:42:00.373678   12431 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19575-5257/.minikube/profiles/kubenet-149381/client.crt: no such file or directory" logger="UnhandledError"
E0904 20:42:02.935229   12431 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19575-5257/.minikube/profiles/kubenet-149381/client.crt: no such file or directory" logger="UnhandledError"
E0904 20:42:03.043834   12431 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19575-5257/.minikube/profiles/bridge-149381/client.crt: no such file or directory" logger="UnhandledError"
E0904 20:42:03.050283   12431 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19575-5257/.minikube/profiles/bridge-149381/client.crt: no such file or directory" logger="UnhandledError"
E0904 20:42:03.061800   12431 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19575-5257/.minikube/profiles/bridge-149381/client.crt: no such file or directory" logger="UnhandledError"
E0904 20:42:03.083344   12431 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19575-5257/.minikube/profiles/bridge-149381/client.crt: no such file or directory" logger="UnhandledError"
E0904 20:42:03.124793   12431 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19575-5257/.minikube/profiles/bridge-149381/client.crt: no such file or directory" logger="UnhandledError"
E0904 20:42:03.206320   12431 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19575-5257/.minikube/profiles/bridge-149381/client.crt: no such file or directory" logger="UnhandledError"
E0904 20:42:03.368028   12431 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19575-5257/.minikube/profiles/bridge-149381/client.crt: no such file or directory" logger="UnhandledError"
E0904 20:42:03.689731   12431 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19575-5257/.minikube/profiles/bridge-149381/client.crt: no such file or directory" logger="UnhandledError"
E0904 20:42:04.331959   12431 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19575-5257/.minikube/profiles/bridge-149381/client.crt: no such file or directory" logger="UnhandledError"
E0904 20:42:05.613968   12431 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19575-5257/.minikube/profiles/bridge-149381/client.crt: no such file or directory" logger="UnhandledError"
E0904 20:42:08.056628   12431 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19575-5257/.minikube/profiles/kubenet-149381/client.crt: no such file or directory" logger="UnhandledError"
E0904 20:42:08.176224   12431 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19575-5257/.minikube/profiles/bridge-149381/client.crt: no such file or directory" logger="UnhandledError"
E0904 20:42:13.298515   12431 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19575-5257/.minikube/profiles/bridge-149381/client.crt: no such file or directory" logger="UnhandledError"
E0904 20:42:18.298646   12431 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19575-5257/.minikube/profiles/kubenet-149381/client.crt: no such file or directory" logger="UnhandledError"
E0904 20:42:23.539846   12431 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19575-5257/.minikube/profiles/bridge-149381/client.crt: no such file or directory" logger="UnhandledError"
E0904 20:42:28.519379   12431 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19575-5257/.minikube/profiles/custom-flannel-149381/client.crt: no such file or directory" logger="UnhandledError"
E0904 20:42:32.000771   12431 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19575-5257/.minikube/profiles/flannel-149381/client.crt: no such file or directory" logger="UnhandledError"
E0904 20:42:38.780531   12431 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19575-5257/.minikube/profiles/kubenet-149381/client.crt: no such file or directory" logger="UnhandledError"
E0904 20:42:44.021755   12431 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19575-5257/.minikube/profiles/bridge-149381/client.crt: no such file or directory" logger="UnhandledError"
E0904 20:42:44.143278   12431 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19575-5257/.minikube/profiles/false-149381/client.crt: no such file or directory" logger="UnhandledError"
E0904 20:42:59.729489   12431 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19575-5257/.minikube/profiles/auto-149381/client.crt: no such file or directory" logger="UnhandledError"
E0904 20:43:10.870225   12431 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19575-5257/.minikube/profiles/kindnet-149381/client.crt: no such file or directory" logger="UnhandledError"
E0904 20:43:17.626286   12431 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19575-5257/.minikube/profiles/enable-default-cni-149381/client.crt: no such file or directory" logger="UnhandledError"
E0904 20:43:19.742083   12431 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19575-5257/.minikube/profiles/kubenet-149381/client.crt: no such file or directory" logger="UnhandledError"
E0904 20:43:24.983517   12431 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19575-5257/.minikube/profiles/bridge-149381/client.crt: no such file or directory" logger="UnhandledError"
E0904 20:43:27.432870   12431 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19575-5257/.minikube/profiles/auto-149381/client.crt: no such file or directory" logger="UnhandledError"
E0904 20:43:38.571198   12431 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19575-5257/.minikube/profiles/kindnet-149381/client.crt: no such file or directory" logger="UnhandledError"
E0904 20:43:41.516743   12431 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19575-5257/.minikube/profiles/skaffold-117538/client.crt: no such file or directory" logger="UnhandledError"
E0904 20:43:47.772704   12431 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19575-5257/.minikube/profiles/calico-149381/client.crt: no such file or directory" logger="UnhandledError"
E0904 20:43:52.233298   12431 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19575-5257/.minikube/profiles/functional-566210/client.crt: no such file or directory" logger="UnhandledError"
E0904 20:43:53.923059   12431 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19575-5257/.minikube/profiles/flannel-149381/client.crt: no such file or directory" logger="UnhandledError"
E0904 20:43:54.283467   12431 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19575-5257/.minikube/profiles/gvisor-769621/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-380781 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --kubernetes-version=v1.31.0: (4m59.353692657s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-380781 -n embed-certs-380781
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (299.61s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-dl4sz" [2fce8455-cb53-4d26-9cf2-8760b953d1a7] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004327322s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.08s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-dl4sz" [2fce8455-cb53-4d26-9cf2-8760b953d1a7] Running
E0904 20:44:15.477602   12431 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19575-5257/.minikube/profiles/calico-149381/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.005312912s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-220207 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.08s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.21s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-220207 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/gvisor-addon:2
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.21s)

TestStartStop/group/no-preload/serial/Pause (2.51s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-220207 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-220207 -n no-preload-220207
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-220207 -n no-preload-220207: exit status 2 (257.263782ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-220207 -n no-preload-220207
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-220207 -n no-preload-220207: exit status 2 (249.197862ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p no-preload-220207 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-220207 -n no-preload-220207
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-220207 -n no-preload-220207
--- PASS: TestStartStop/group/no-preload/serial/Pause (2.51s)

TestStartStop/group/newest-cni/serial/FirstStart (61.09s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-770973 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --kubernetes-version=v1.31.0
E0904 20:44:20.214936   12431 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19575-5257/.minikube/profiles/addons-586464/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-770973 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --kubernetes-version=v1.31.0: (1m1.088631443s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (61.09s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (8.01s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-ppmb6" [31e50d50-5740-422a-a021-3ff5d2538932] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
E0904 20:44:41.664370   12431 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19575-5257/.minikube/profiles/kubenet-149381/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "kubernetes-dashboard-695b96c756-ppmb6" [31e50d50-5740-422a-a021-3ff5d2538932] Running
E0904 20:44:44.657978   12431 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19575-5257/.minikube/profiles/custom-flannel-149381/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 8.005595987s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (8.01s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.1s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-ppmb6" [31e50d50-5740-422a-a021-3ff5d2538932] Running
E0904 20:44:46.905672   12431 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19575-5257/.minikube/profiles/bridge-149381/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.005724453s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-533399 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.10s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.27s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-533399 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/gvisor-addon:2
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.27s)

TestStartStop/group/default-k8s-diff-port/serial/Pause (3.76s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-533399 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-533399 -n default-k8s-diff-port-533399
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-533399 -n default-k8s-diff-port-533399: exit status 2 (261.972793ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-533399 -n default-k8s-diff-port-533399
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-533399 -n default-k8s-diff-port-533399: exit status 2 (265.984628ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-diff-port-533399 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-amd64 unpause -p default-k8s-diff-port-533399 --alsologtostderr -v=1: (1.629368189s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-533399 -n default-k8s-diff-port-533399
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-533399 -n default-k8s-diff-port-533399
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (3.76s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.82s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-770973 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.82s)

TestStartStop/group/newest-cni/serial/Stop (7.52s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-770973 --alsologtostderr -v=3
E0904 20:45:27.984595   12431 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19575-5257/.minikube/profiles/false-149381/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-770973 --alsologtostderr -v=3: (7.515109244s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (7.52s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.18s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-770973 -n newest-cni-770973
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-770973 -n newest-cni-770973: exit status 7 (61.447343ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-770973 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.18s)

TestStartStop/group/newest-cni/serial/SecondStart (37.34s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-770973 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --kubernetes-version=v1.31.0
E0904 20:45:33.765437   12431 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19575-5257/.minikube/profiles/enable-default-cni-149381/client.crt: no such file or directory" logger="UnhandledError"
E0904 20:46:01.468674   12431 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19575-5257/.minikube/profiles/enable-default-cni-149381/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-770973 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --kubernetes-version=v1.31.0: (37.098523795s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-770973 -n newest-cni-770973
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (37.34s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.2s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-770973 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/gvisor-addon:2
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.20s)

TestStartStop/group/newest-cni/serial/Pause (2.3s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-770973 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-770973 -n newest-cni-770973
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-770973 -n newest-cni-770973: exit status 2 (239.44046ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-770973 -n newest-cni-770973
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-770973 -n newest-cni-770973: exit status 2 (250.569581ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-770973 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-770973 -n newest-cni-770973
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-770973 -n newest-cni-770973
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.30s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-nwlhj" [79053394-74d8-4ab8-8062-4c4282174bc9] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003179726s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.00s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.08s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-nwlhj" [79053394-74d8-4ab8-8062-4c4282174bc9] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004518456s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-380781 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.08s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.2s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-380781 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/gvisor-addon:2
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.20s)

TestStartStop/group/embed-certs/serial/Pause (2.4s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-380781 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-380781 -n embed-certs-380781
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-380781 -n embed-certs-380781: exit status 2 (245.54357ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-380781 -n embed-certs-380781
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-380781 -n embed-certs-380781: exit status 2 (253.011651ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p embed-certs-380781 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-380781 -n embed-certs-380781
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-380781 -n embed-certs-380781
--- PASS: TestStartStop/group/embed-certs/serial/Pause (2.40s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-kp8qj" [44d5a7f0-5e61-47ed-8e65-532580c8b37d] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003951553s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.07s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-kp8qj" [44d5a7f0-5e61-47ed-8e65-532580c8b37d] Running
E0904 20:46:37.764984   12431 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19575-5257/.minikube/profiles/flannel-149381/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003606792s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-870162 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.07s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.2s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-870162 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/gvisor-addon:2
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.20s)

TestStartStop/group/old-k8s-version/serial/Pause (2.25s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-870162 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-870162 -n old-k8s-version-870162
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-870162 -n old-k8s-version-870162: exit status 2 (239.257748ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-870162 -n old-k8s-version-870162
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-870162 -n old-k8s-version-870162: exit status 2 (235.829423ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p old-k8s-version-870162 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-870162 -n old-k8s-version-870162
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-870162 -n old-k8s-version-870162
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (2.25s)

Test skip (31/341)

TestDownloadOnly/v1.20.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

TestDownloadOnly/v1.20.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

TestDownloadOnly/v1.20.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.31.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.31.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.31.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnlyKic (0s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

                                                
                                    
x
+
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:500: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
x
+
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
x
+
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:550: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                    
x
+
TestImageBuild/serial/validateImageBuildWithBuildEnv (0s)

                                                
                                                
=== RUN   TestImageBuild/serial/validateImageBuildWithBuildEnv
image_test.go:114: skipping due to https://github.com/kubernetes/minikube/issues/12431
--- SKIP: TestImageBuild/serial/validateImageBuildWithBuildEnv (0.00s)

                                                
                                    
x
+
TestKicCustomNetwork (0s)

                                                
                                                
=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

                                                
                                    
x
+
TestKicExistingNetwork (0s)

                                                
                                                
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

                                                
                                    
x
+
TestKicCustomSubnet (0s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

                                                
                                    
x
+
TestKicStaticIP (0s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

                                                
                                    
x
+
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
x
+
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
x
+
TestInsufficientStorage (0s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

                                                
                                    
x
+
TestMissingContainerUpgrade (0s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

                                                
                                    
x
+
TestNetworkPlugins/group/cilium (3.35s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:626: 
----------------------- debugLogs start: cilium-149381 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-149381

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-149381

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-149381

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-149381

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-149381

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-149381

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-149381

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-149381

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-149381

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-149381

>>> host: /etc/nsswitch.conf:
* Profile "cilium-149381" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-149381"

>>> host: /etc/hosts:
* Profile "cilium-149381" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-149381"

>>> host: /etc/resolv.conf:
* Profile "cilium-149381" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-149381"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-149381

>>> host: crictl pods:
* Profile "cilium-149381" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-149381"

>>> host: crictl containers:
* Profile "cilium-149381" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-149381"

>>> k8s: describe netcat deployment:
error: context "cilium-149381" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-149381" does not exist

>>> k8s: netcat logs:
error: context "cilium-149381" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-149381" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-149381" does not exist

>>> k8s: coredns logs:
error: context "cilium-149381" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-149381" does not exist

>>> k8s: api server logs:
error: context "cilium-149381" does not exist

>>> host: /etc/cni:
* Profile "cilium-149381" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-149381"

>>> host: ip a s:
* Profile "cilium-149381" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-149381"

>>> host: ip r s:
* Profile "cilium-149381" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-149381"

>>> host: iptables-save:
* Profile "cilium-149381" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-149381"

>>> host: iptables table nat:
* Profile "cilium-149381" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-149381"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-149381

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-149381

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-149381" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-149381" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-149381

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-149381

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-149381" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-149381" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-149381" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-149381" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-149381" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-149381" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-149381"

>>> host: kubelet daemon config:
* Profile "cilium-149381" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-149381"

>>> k8s: kubelet logs:
* Profile "cilium-149381" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-149381"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-149381" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-149381"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-149381" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-149381"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-149381

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-149381" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-149381"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-149381" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-149381"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-149381" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-149381"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-149381" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-149381"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-149381" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-149381"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-149381" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-149381"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-149381" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-149381"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-149381" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-149381"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "cilium-149381" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-149381"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "cilium-149381" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-149381"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "cilium-149381" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-149381"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-149381" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-149381"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-149381" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-149381"

>>> host: containerd config dump:
* Profile "cilium-149381" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-149381"

>>> host: crio daemon status:
* Profile "cilium-149381" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-149381"

>>> host: crio daemon config:
* Profile "cilium-149381" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-149381"

>>> host: /etc/crio:
* Profile "cilium-149381" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-149381"

>>> host: crio config:
* Profile "cilium-149381" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-149381"

----------------------- debugLogs end: cilium-149381 [took: 3.178054407s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-149381" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-149381
--- SKIP: TestNetworkPlugins/group/cilium (3.35s)

TestStartStop/group/disable-driver-mounts (0.15s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-496499" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-496499
--- SKIP: TestStartStop/group/disable-driver-mounts (0.15s)
