Test Report: KVM_Linux 19576

                    
2e9b50ac88536491e648f1503809a6b59d99d481:2024-09-06:36104

Tests failed (2/341)

| Order | Failed test                     | Duration (s) |
|-------|---------------------------------|--------------|
| 33    | TestAddons/parallel/Registry    | 73.53        |
| 111   | TestFunctional/parallel/License | 0.11         |
TestAddons/parallel/Registry (73.53s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:332: registry stabilized in 2.919133ms
addons_test.go:334: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-6fb4cdfc84-csqdb" [110dd636-029b-4474-abd2-864399927b41] Running
addons_test.go:334: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.006083138s
addons_test.go:337: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-tpzll" [4c990573-82b7-4c3e-aa76-d699dd353669] Running
addons_test.go:337: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.004765793s
addons_test.go:342: (dbg) Run:  kubectl --context addons-009491 delete po -l run=registry-test --now
addons_test.go:347: (dbg) Run:  kubectl --context addons-009491 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:347: (dbg) Non-zero exit: kubectl --context addons-009491 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": exit status 1 (1m0.095899215s)

-- stdout --
	pod "registry-test" deleted

-- /stdout --
** stderr ** 
	error: timed out waiting for the condition

** /stderr **
addons_test.go:349: failed to hit registry.kube-system.svc.cluster.local. args "kubectl --context addons-009491 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c \"wget --spider -S http://registry.kube-system.svc.cluster.local\"" failed: exit status 1
addons_test.go:353: expected curl response be "HTTP/1.1 200", but got *pod "registry-test" deleted
*
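The failure above means the in-cluster `wget` probe never got a response from the registry Service before the one-minute timeout. For manual triage, the check can be reproduced step by step (a sketch, not part of the test suite; it assumes the `addons-009491` profile is still running, and `registry-probe` is just an illustrative pod name):

```shell
# Separate DNS-resolution failures from registry outages by running
# both checks in one throwaway pod (same busybox image as the test):
kubectl --context addons-009491 run --rm registry-probe --restart=Never \
  --image=gcr.io/k8s-minikube/busybox -it -- sh -c \
  "nslookup registry.kube-system.svc.cluster.local && \
   wget --spider -S http://registry.kube-system.svc.cluster.local"

# If DNS resolves but wget still hangs, check whether the Service has
# any ready endpoints, and whether the registry pod itself is healthy
# (the test selects pods with the actual-registry=true label):
kubectl --context addons-009491 -n kube-system get endpoints registry
kubectl --context addons-009491 -n kube-system get pods -l actual-registry=true
```

Since the follow-up `GET http://192.168.39.227:5000` in the log succeeded from the host, the registry container itself was likely up, which points the triage toward in-cluster DNS or Service routing rather than the registry pod.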
addons_test.go:361: (dbg) Run:  out/minikube-linux-amd64 -p addons-009491 ip
2024/09/06 18:43:02 [DEBUG] GET http://192.168.39.227:5000
addons_test.go:390: (dbg) Run:  out/minikube-linux-amd64 -p addons-009491 addons disable registry --alsologtostderr -v=1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-009491 -n addons-009491
helpers_test.go:244: <<< TestAddons/parallel/Registry FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Registry]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-009491 logs -n 25
helpers_test.go:252: TestAddons/parallel/Registry logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| delete  | -p download-only-214882                                                                     | download-only-214882 | jenkins | v1.34.0 | 06 Sep 24 18:29 UTC | 06 Sep 24 18:29 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-748579 | jenkins | v1.34.0 | 06 Sep 24 18:29 UTC |                     |
	|         | binary-mirror-748579                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                      |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                      |         |         |                     |                     |
	|         | http://127.0.0.1:43707                                                                      |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-748579                                                                     | binary-mirror-748579 | jenkins | v1.34.0 | 06 Sep 24 18:29 UTC | 06 Sep 24 18:29 UTC |
	| addons  | disable dashboard -p                                                                        | addons-009491        | jenkins | v1.34.0 | 06 Sep 24 18:29 UTC |                     |
	|         | addons-009491                                                                               |                      |         |         |                     |                     |
	| addons  | enable dashboard -p                                                                         | addons-009491        | jenkins | v1.34.0 | 06 Sep 24 18:29 UTC |                     |
	|         | addons-009491                                                                               |                      |         |         |                     |                     |
	| start   | -p addons-009491 --wait=true                                                                | addons-009491        | jenkins | v1.34.0 | 06 Sep 24 18:29 UTC | 06 Sep 24 18:33 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                      |         |         |                     |                     |
	|         | --addons=registry                                                                           |                      |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                      |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                      |         |         |                     |                     |
	|         | --driver=kvm2  --addons=ingress                                                             |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                      |         |         |                     |                     |
	|         | --addons=helm-tiller                                                                        |                      |         |         |                     |                     |
	| addons  | addons-009491 addons disable                                                                | addons-009491        | jenkins | v1.34.0 | 06 Sep 24 18:33 UTC | 06 Sep 24 18:33 UTC |
	|         | volcano --alsologtostderr -v=1                                                              |                      |         |         |                     |                     |
	| addons  | addons-009491 addons                                                                        | addons-009491        | jenkins | v1.34.0 | 06 Sep 24 18:41 UTC | 06 Sep 24 18:41 UTC |
	|         | disable metrics-server                                                                      |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-009491 addons disable                                                                | addons-009491        | jenkins | v1.34.0 | 06 Sep 24 18:42 UTC | 06 Sep 24 18:42 UTC |
	|         | helm-tiller --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-009491 addons disable                                                                | addons-009491        | jenkins | v1.34.0 | 06 Sep 24 18:42 UTC | 06 Sep 24 18:42 UTC |
	|         | yakd --alsologtostderr -v=1                                                                 |                      |         |         |                     |                     |
	| ssh     | addons-009491 ssh cat                                                                       | addons-009491        | jenkins | v1.34.0 | 06 Sep 24 18:42 UTC | 06 Sep 24 18:42 UTC |
	|         | /opt/local-path-provisioner/pvc-a78db530-dc97-4b7f-a847-310a42db2e7a_default_test-pvc/file1 |                      |         |         |                     |                     |
	| addons  | addons-009491 addons disable                                                                | addons-009491        | jenkins | v1.34.0 | 06 Sep 24 18:42 UTC | 06 Sep 24 18:42 UTC |
	|         | storage-provisioner-rancher                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-009491        | jenkins | v1.34.0 | 06 Sep 24 18:42 UTC | 06 Sep 24 18:42 UTC |
	|         | -p addons-009491                                                                            |                      |         |         |                     |                     |
	| addons  | addons-009491 addons                                                                        | addons-009491        | jenkins | v1.34.0 | 06 Sep 24 18:42 UTC | 06 Sep 24 18:42 UTC |
	|         | disable csi-hostpath-driver                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-009491        | jenkins | v1.34.0 | 06 Sep 24 18:42 UTC | 06 Sep 24 18:42 UTC |
	|         | addons-009491                                                                               |                      |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-009491        | jenkins | v1.34.0 | 06 Sep 24 18:42 UTC | 06 Sep 24 18:42 UTC |
	|         | -p addons-009491                                                                            |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-009491 addons                                                                        | addons-009491        | jenkins | v1.34.0 | 06 Sep 24 18:42 UTC | 06 Sep 24 18:42 UTC |
	|         | disable volumesnapshots                                                                     |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-009491        | jenkins | v1.34.0 | 06 Sep 24 18:42 UTC | 06 Sep 24 18:42 UTC |
	|         | addons-009491                                                                               |                      |         |         |                     |                     |
	| addons  | addons-009491 addons disable                                                                | addons-009491        | jenkins | v1.34.0 | 06 Sep 24 18:42 UTC | 06 Sep 24 18:42 UTC |
	|         | headlamp --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| ssh     | addons-009491 ssh curl -s                                                                   | addons-009491        | jenkins | v1.34.0 | 06 Sep 24 18:42 UTC | 06 Sep 24 18:42 UTC |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                      |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                      |         |         |                     |                     |
	| ip      | addons-009491 ip                                                                            | addons-009491        | jenkins | v1.34.0 | 06 Sep 24 18:42 UTC | 06 Sep 24 18:42 UTC |
	| addons  | addons-009491 addons disable                                                                | addons-009491        | jenkins | v1.34.0 | 06 Sep 24 18:42 UTC | 06 Sep 24 18:42 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-009491 addons disable                                                                | addons-009491        | jenkins | v1.34.0 | 06 Sep 24 18:42 UTC | 06 Sep 24 18:42 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                      |         |         |                     |                     |
	| ip      | addons-009491 ip                                                                            | addons-009491        | jenkins | v1.34.0 | 06 Sep 24 18:43 UTC | 06 Sep 24 18:43 UTC |
	| addons  | addons-009491 addons disable                                                                | addons-009491        | jenkins | v1.34.0 | 06 Sep 24 18:43 UTC | 06 Sep 24 18:43 UTC |
	|         | registry --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/06 18:29:28
	Running on machine: ubuntu-20-agent-3
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0906 18:29:28.490702   13921 out.go:345] Setting OutFile to fd 1 ...
	I0906 18:29:28.490905   13921 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0906 18:29:28.490913   13921 out.go:358] Setting ErrFile to fd 2...
	I0906 18:29:28.490917   13921 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0906 18:29:28.491129   13921 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19576-6054/.minikube/bin
	I0906 18:29:28.491672   13921 out.go:352] Setting JSON to false
	I0906 18:29:28.492437   13921 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-3","uptime":715,"bootTime":1725646653,"procs":171,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0906 18:29:28.492493   13921 start.go:139] virtualization: kvm guest
	I0906 18:29:28.494289   13921 out.go:177] * [addons-009491] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0906 18:29:28.495853   13921 out.go:177]   - MINIKUBE_LOCATION=19576
	I0906 18:29:28.495852   13921 notify.go:220] Checking for updates...
	I0906 18:29:28.497018   13921 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0906 18:29:28.498129   13921 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19576-6054/kubeconfig
	I0906 18:29:28.499223   13921 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19576-6054/.minikube
	I0906 18:29:28.500306   13921 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0906 18:29:28.501296   13921 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0906 18:29:28.502478   13921 driver.go:394] Setting default libvirt URI to qemu:///system
	I0906 18:29:28.532313   13921 out.go:177] * Using the kvm2 driver based on user configuration
	I0906 18:29:28.533316   13921 start.go:297] selected driver: kvm2
	I0906 18:29:28.533338   13921 start.go:901] validating driver "kvm2" against <nil>
	I0906 18:29:28.533355   13921 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0906 18:29:28.533975   13921 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0906 18:29:28.534046   13921 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19576-6054/.minikube/bin:/home/jenkins/workspace/KVM_Linux_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0906 18:29:28.548249   13921 install.go:137] /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0906 18:29:28.548288   13921 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0906 18:29:28.548466   13921 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0906 18:29:28.548518   13921 cni.go:84] Creating CNI manager for ""
	I0906 18:29:28.548532   13921 cni.go:158] "kvm2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0906 18:29:28.548542   13921 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0906 18:29:28.548592   13921 start.go:340] cluster config:
	{Name:addons-009491 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-009491 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0906 18:29:28.548674   13921 iso.go:125] acquiring lock: {Name:mk05313ecb02befdc19949aecb1e2b6c72ebbece Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0906 18:29:28.550092   13921 out.go:177] * Starting "addons-009491" primary control-plane node in "addons-009491" cluster
	I0906 18:29:28.551079   13921 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0906 18:29:28.551109   13921 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19576-6054/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4
	I0906 18:29:28.551119   13921 cache.go:56] Caching tarball of preloaded images
	I0906 18:29:28.551171   13921 preload.go:172] Found /home/jenkins/minikube-integration/19576-6054/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0906 18:29:28.551180   13921 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0906 18:29:28.551440   13921 profile.go:143] Saving config to /home/jenkins/minikube-integration/19576-6054/.minikube/profiles/addons-009491/config.json ...
	I0906 18:29:28.551456   13921 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19576-6054/.minikube/profiles/addons-009491/config.json: {Name:mkf5c318e3e812a1cb8bcd919218a7d843392eb2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 18:29:28.551573   13921 start.go:360] acquireMachinesLock for addons-009491: {Name:mk7c9287f82cd3fe91f0c959d92300e34c3db633 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0906 18:29:28.551615   13921 start.go:364] duration metric: took 28.965µs to acquireMachinesLock for "addons-009491"
	I0906 18:29:28.551632   13921 start.go:93] Provisioning new machine with config: &{Name:addons-009491 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.34.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-009491 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0906 18:29:28.551699   13921 start.go:125] createHost starting for "" (driver="kvm2")
	I0906 18:29:28.553121   13921 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0906 18:29:28.553218   13921 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0906 18:29:28.553253   13921 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 18:29:28.566638   13921 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46465
	I0906 18:29:28.567038   13921 main.go:141] libmachine: () Calling .GetVersion
	I0906 18:29:28.567528   13921 main.go:141] libmachine: Using API Version  1
	I0906 18:29:28.567548   13921 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 18:29:28.567882   13921 main.go:141] libmachine: () Calling .GetMachineName
	I0906 18:29:28.568041   13921 main.go:141] libmachine: (addons-009491) Calling .GetMachineName
	I0906 18:29:28.568183   13921 main.go:141] libmachine: (addons-009491) Calling .DriverName
	I0906 18:29:28.568331   13921 start.go:159] libmachine.API.Create for "addons-009491" (driver="kvm2")
	I0906 18:29:28.568372   13921 client.go:168] LocalClient.Create starting
	I0906 18:29:28.568407   13921 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19576-6054/.minikube/certs/ca.pem
	I0906 18:29:28.770263   13921 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19576-6054/.minikube/certs/cert.pem
	I0906 18:29:29.136308   13921 main.go:141] libmachine: Running pre-create checks...
	I0906 18:29:29.136332   13921 main.go:141] libmachine: (addons-009491) Calling .PreCreateCheck
	I0906 18:29:29.136823   13921 main.go:141] libmachine: (addons-009491) Calling .GetConfigRaw
	I0906 18:29:29.137261   13921 main.go:141] libmachine: Creating machine...
	I0906 18:29:29.137274   13921 main.go:141] libmachine: (addons-009491) Calling .Create
	I0906 18:29:29.137414   13921 main.go:141] libmachine: (addons-009491) Creating KVM machine...
	I0906 18:29:29.138677   13921 main.go:141] libmachine: (addons-009491) DBG | found existing default KVM network
	I0906 18:29:29.139494   13921 main.go:141] libmachine: (addons-009491) DBG | I0906 18:29:29.139316   13943 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000015330}
	I0906 18:29:29.139530   13921 main.go:141] libmachine: (addons-009491) DBG | created network xml: 
	I0906 18:29:29.139547   13921 main.go:141] libmachine: (addons-009491) DBG | <network>
	I0906 18:29:29.139562   13921 main.go:141] libmachine: (addons-009491) DBG |   <name>mk-addons-009491</name>
	I0906 18:29:29.139576   13921 main.go:141] libmachine: (addons-009491) DBG |   <dns enable='no'/>
	I0906 18:29:29.139585   13921 main.go:141] libmachine: (addons-009491) DBG |   
	I0906 18:29:29.139593   13921 main.go:141] libmachine: (addons-009491) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0906 18:29:29.139601   13921 main.go:141] libmachine: (addons-009491) DBG |     <dhcp>
	I0906 18:29:29.139608   13921 main.go:141] libmachine: (addons-009491) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0906 18:29:29.139614   13921 main.go:141] libmachine: (addons-009491) DBG |     </dhcp>
	I0906 18:29:29.139620   13921 main.go:141] libmachine: (addons-009491) DBG |   </ip>
	I0906 18:29:29.139626   13921 main.go:141] libmachine: (addons-009491) DBG |   
	I0906 18:29:29.139632   13921 main.go:141] libmachine: (addons-009491) DBG | </network>
	I0906 18:29:29.139641   13921 main.go:141] libmachine: (addons-009491) DBG | 
	I0906 18:29:29.145146   13921 main.go:141] libmachine: (addons-009491) DBG | trying to create private KVM network mk-addons-009491 192.168.39.0/24...
	I0906 18:29:29.205259   13921 main.go:141] libmachine: (addons-009491) DBG | private KVM network mk-addons-009491 192.168.39.0/24 created
	I0906 18:29:29.205291   13921 main.go:141] libmachine: (addons-009491) DBG | I0906 18:29:29.205193   13943 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19576-6054/.minikube
	I0906 18:29:29.205319   13921 main.go:141] libmachine: (addons-009491) Setting up store path in /home/jenkins/minikube-integration/19576-6054/.minikube/machines/addons-009491 ...
	I0906 18:29:29.205339   13921 main.go:141] libmachine: (addons-009491) Building disk image from file:///home/jenkins/minikube-integration/19576-6054/.minikube/cache/iso/amd64/minikube-v1.34.0-amd64.iso
	I0906 18:29:29.205358   13921 main.go:141] libmachine: (addons-009491) Downloading /home/jenkins/minikube-integration/19576-6054/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19576-6054/.minikube/cache/iso/amd64/minikube-v1.34.0-amd64.iso...
	I0906 18:29:29.451942   13921 main.go:141] libmachine: (addons-009491) DBG | I0906 18:29:29.451837   13943 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19576-6054/.minikube/machines/addons-009491/id_rsa...
	I0906 18:29:29.649621   13921 main.go:141] libmachine: (addons-009491) DBG | I0906 18:29:29.649509   13943 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19576-6054/.minikube/machines/addons-009491/addons-009491.rawdisk...
	I0906 18:29:29.649647   13921 main.go:141] libmachine: (addons-009491) DBG | Writing magic tar header
	I0906 18:29:29.649673   13921 main.go:141] libmachine: (addons-009491) DBG | Writing SSH key tar header
	I0906 18:29:29.649694   13921 main.go:141] libmachine: (addons-009491) DBG | I0906 18:29:29.649630   13943 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19576-6054/.minikube/machines/addons-009491 ...
	I0906 18:29:29.649772   13921 main.go:141] libmachine: (addons-009491) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19576-6054/.minikube/machines/addons-009491
	I0906 18:29:29.649795   13921 main.go:141] libmachine: (addons-009491) Setting executable bit set on /home/jenkins/minikube-integration/19576-6054/.minikube/machines/addons-009491 (perms=drwx------)
	I0906 18:29:29.649811   13921 main.go:141] libmachine: (addons-009491) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19576-6054/.minikube/machines
	I0906 18:29:29.649821   13921 main.go:141] libmachine: (addons-009491) Setting executable bit set on /home/jenkins/minikube-integration/19576-6054/.minikube/machines (perms=drwxr-xr-x)
	I0906 18:29:29.649830   13921 main.go:141] libmachine: (addons-009491) Setting executable bit set on /home/jenkins/minikube-integration/19576-6054/.minikube (perms=drwxr-xr-x)
	I0906 18:29:29.649836   13921 main.go:141] libmachine: (addons-009491) Setting executable bit set on /home/jenkins/minikube-integration/19576-6054 (perms=drwxrwxr-x)
	I0906 18:29:29.649845   13921 main.go:141] libmachine: (addons-009491) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0906 18:29:29.649853   13921 main.go:141] libmachine: (addons-009491) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0906 18:29:29.649861   13921 main.go:141] libmachine: (addons-009491) Creating domain...
	I0906 18:29:29.649871   13921 main.go:141] libmachine: (addons-009491) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19576-6054/.minikube
	I0906 18:29:29.649880   13921 main.go:141] libmachine: (addons-009491) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19576-6054
	I0906 18:29:29.649887   13921 main.go:141] libmachine: (addons-009491) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0906 18:29:29.649894   13921 main.go:141] libmachine: (addons-009491) DBG | Checking permissions on dir: /home/jenkins
	I0906 18:29:29.649900   13921 main.go:141] libmachine: (addons-009491) DBG | Checking permissions on dir: /home
	I0906 18:29:29.649906   13921 main.go:141] libmachine: (addons-009491) DBG | Skipping /home - not owner
	I0906 18:29:29.650879   13921 main.go:141] libmachine: (addons-009491) define libvirt domain using xml: 
	I0906 18:29:29.650904   13921 main.go:141] libmachine: (addons-009491) <domain type='kvm'>
	I0906 18:29:29.650926   13921 main.go:141] libmachine: (addons-009491)   <name>addons-009491</name>
	I0906 18:29:29.650943   13921 main.go:141] libmachine: (addons-009491)   <memory unit='MiB'>4000</memory>
	I0906 18:29:29.650956   13921 main.go:141] libmachine: (addons-009491)   <vcpu>2</vcpu>
	I0906 18:29:29.650962   13921 main.go:141] libmachine: (addons-009491)   <features>
	I0906 18:29:29.650970   13921 main.go:141] libmachine: (addons-009491)     <acpi/>
	I0906 18:29:29.650975   13921 main.go:141] libmachine: (addons-009491)     <apic/>
	I0906 18:29:29.651007   13921 main.go:141] libmachine: (addons-009491)     <pae/>
	I0906 18:29:29.651018   13921 main.go:141] libmachine: (addons-009491)     
	I0906 18:29:29.651036   13921 main.go:141] libmachine: (addons-009491)   </features>
	I0906 18:29:29.651046   13921 main.go:141] libmachine: (addons-009491)   <cpu mode='host-passthrough'>
	I0906 18:29:29.651054   13921 main.go:141] libmachine: (addons-009491)   
	I0906 18:29:29.651062   13921 main.go:141] libmachine: (addons-009491)   </cpu>
	I0906 18:29:29.651076   13921 main.go:141] libmachine: (addons-009491)   <os>
	I0906 18:29:29.651090   13921 main.go:141] libmachine: (addons-009491)     <type>hvm</type>
	I0906 18:29:29.651102   13921 main.go:141] libmachine: (addons-009491)     <boot dev='cdrom'/>
	I0906 18:29:29.651113   13921 main.go:141] libmachine: (addons-009491)     <boot dev='hd'/>
	I0906 18:29:29.651125   13921 main.go:141] libmachine: (addons-009491)     <bootmenu enable='no'/>
	I0906 18:29:29.651135   13921 main.go:141] libmachine: (addons-009491)   </os>
	I0906 18:29:29.651147   13921 main.go:141] libmachine: (addons-009491)   <devices>
	I0906 18:29:29.651158   13921 main.go:141] libmachine: (addons-009491)     <disk type='file' device='cdrom'>
	I0906 18:29:29.651176   13921 main.go:141] libmachine: (addons-009491)       <source file='/home/jenkins/minikube-integration/19576-6054/.minikube/machines/addons-009491/boot2docker.iso'/>
	I0906 18:29:29.651187   13921 main.go:141] libmachine: (addons-009491)       <target dev='hdc' bus='scsi'/>
	I0906 18:29:29.651207   13921 main.go:141] libmachine: (addons-009491)       <readonly/>
	I0906 18:29:29.651218   13921 main.go:141] libmachine: (addons-009491)     </disk>
	I0906 18:29:29.651227   13921 main.go:141] libmachine: (addons-009491)     <disk type='file' device='disk'>
	I0906 18:29:29.651234   13921 main.go:141] libmachine: (addons-009491)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0906 18:29:29.651245   13921 main.go:141] libmachine: (addons-009491)       <source file='/home/jenkins/minikube-integration/19576-6054/.minikube/machines/addons-009491/addons-009491.rawdisk'/>
	I0906 18:29:29.651253   13921 main.go:141] libmachine: (addons-009491)       <target dev='hda' bus='virtio'/>
	I0906 18:29:29.651260   13921 main.go:141] libmachine: (addons-009491)     </disk>
	I0906 18:29:29.651266   13921 main.go:141] libmachine: (addons-009491)     <interface type='network'>
	I0906 18:29:29.651274   13921 main.go:141] libmachine: (addons-009491)       <source network='mk-addons-009491'/>
	I0906 18:29:29.651281   13921 main.go:141] libmachine: (addons-009491)       <model type='virtio'/>
	I0906 18:29:29.651286   13921 main.go:141] libmachine: (addons-009491)     </interface>
	I0906 18:29:29.651294   13921 main.go:141] libmachine: (addons-009491)     <interface type='network'>
	I0906 18:29:29.651302   13921 main.go:141] libmachine: (addons-009491)       <source network='default'/>
	I0906 18:29:29.651310   13921 main.go:141] libmachine: (addons-009491)       <model type='virtio'/>
	I0906 18:29:29.651315   13921 main.go:141] libmachine: (addons-009491)     </interface>
	I0906 18:29:29.651322   13921 main.go:141] libmachine: (addons-009491)     <serial type='pty'>
	I0906 18:29:29.651327   13921 main.go:141] libmachine: (addons-009491)       <target port='0'/>
	I0906 18:29:29.651333   13921 main.go:141] libmachine: (addons-009491)     </serial>
	I0906 18:29:29.651338   13921 main.go:141] libmachine: (addons-009491)     <console type='pty'>
	I0906 18:29:29.651345   13921 main.go:141] libmachine: (addons-009491)       <target type='serial' port='0'/>
	I0906 18:29:29.651351   13921 main.go:141] libmachine: (addons-009491)     </console>
	I0906 18:29:29.651357   13921 main.go:141] libmachine: (addons-009491)     <rng model='virtio'>
	I0906 18:29:29.651364   13921 main.go:141] libmachine: (addons-009491)       <backend model='random'>/dev/random</backend>
	I0906 18:29:29.651373   13921 main.go:141] libmachine: (addons-009491)     </rng>
	I0906 18:29:29.651380   13921 main.go:141] libmachine: (addons-009491)     
	I0906 18:29:29.651386   13921 main.go:141] libmachine: (addons-009491)     
	I0906 18:29:29.651414   13921 main.go:141] libmachine: (addons-009491)   </devices>
	I0906 18:29:29.651437   13921 main.go:141] libmachine: (addons-009491) </domain>
	I0906 18:29:29.651453   13921 main.go:141] libmachine: (addons-009491) 
	I0906 18:29:29.656653   13921 main.go:141] libmachine: (addons-009491) DBG | domain addons-009491 has defined MAC address 52:54:00:7d:7a:cf in network default
	I0906 18:29:29.657203   13921 main.go:141] libmachine: (addons-009491) Ensuring networks are active...
	I0906 18:29:29.657222   13921 main.go:141] libmachine: (addons-009491) DBG | domain addons-009491 has defined MAC address 52:54:00:47:6e:9e in network mk-addons-009491
	I0906 18:29:29.657818   13921 main.go:141] libmachine: (addons-009491) Ensuring network default is active
	I0906 18:29:29.658144   13921 main.go:141] libmachine: (addons-009491) Ensuring network mk-addons-009491 is active
	I0906 18:29:29.658694   13921 main.go:141] libmachine: (addons-009491) Getting domain xml...
	I0906 18:29:29.659392   13921 main.go:141] libmachine: (addons-009491) Creating domain...
	I0906 18:29:31.028080   13921 main.go:141] libmachine: (addons-009491) Waiting to get IP...
	I0906 18:29:31.028898   13921 main.go:141] libmachine: (addons-009491) DBG | domain addons-009491 has defined MAC address 52:54:00:47:6e:9e in network mk-addons-009491
	I0906 18:29:31.029281   13921 main.go:141] libmachine: (addons-009491) DBG | unable to find current IP address of domain addons-009491 in network mk-addons-009491
	I0906 18:29:31.029337   13921 main.go:141] libmachine: (addons-009491) DBG | I0906 18:29:31.029272   13943 retry.go:31] will retry after 275.375983ms: waiting for machine to come up
	I0906 18:29:31.306644   13921 main.go:141] libmachine: (addons-009491) DBG | domain addons-009491 has defined MAC address 52:54:00:47:6e:9e in network mk-addons-009491
	I0906 18:29:31.307081   13921 main.go:141] libmachine: (addons-009491) DBG | unable to find current IP address of domain addons-009491 in network mk-addons-009491
	I0906 18:29:31.307109   13921 main.go:141] libmachine: (addons-009491) DBG | I0906 18:29:31.307040   13943 retry.go:31] will retry after 328.414552ms: waiting for machine to come up
	I0906 18:29:31.637446   13921 main.go:141] libmachine: (addons-009491) DBG | domain addons-009491 has defined MAC address 52:54:00:47:6e:9e in network mk-addons-009491
	I0906 18:29:31.637811   13921 main.go:141] libmachine: (addons-009491) DBG | unable to find current IP address of domain addons-009491 in network mk-addons-009491
	I0906 18:29:31.637838   13921 main.go:141] libmachine: (addons-009491) DBG | I0906 18:29:31.637772   13943 retry.go:31] will retry after 468.365424ms: waiting for machine to come up
	I0906 18:29:32.107269   13921 main.go:141] libmachine: (addons-009491) DBG | domain addons-009491 has defined MAC address 52:54:00:47:6e:9e in network mk-addons-009491
	I0906 18:29:32.107643   13921 main.go:141] libmachine: (addons-009491) DBG | unable to find current IP address of domain addons-009491 in network mk-addons-009491
	I0906 18:29:32.107661   13921 main.go:141] libmachine: (addons-009491) DBG | I0906 18:29:32.107612   13943 retry.go:31] will retry after 529.490186ms: waiting for machine to come up
	I0906 18:29:32.638078   13921 main.go:141] libmachine: (addons-009491) DBG | domain addons-009491 has defined MAC address 52:54:00:47:6e:9e in network mk-addons-009491
	I0906 18:29:32.638466   13921 main.go:141] libmachine: (addons-009491) DBG | unable to find current IP address of domain addons-009491 in network mk-addons-009491
	I0906 18:29:32.638492   13921 main.go:141] libmachine: (addons-009491) DBG | I0906 18:29:32.638426   13943 retry.go:31] will retry after 714.605446ms: waiting for machine to come up
	I0906 18:29:33.354301   13921 main.go:141] libmachine: (addons-009491) DBG | domain addons-009491 has defined MAC address 52:54:00:47:6e:9e in network mk-addons-009491
	I0906 18:29:33.354739   13921 main.go:141] libmachine: (addons-009491) DBG | unable to find current IP address of domain addons-009491 in network mk-addons-009491
	I0906 18:29:33.354764   13921 main.go:141] libmachine: (addons-009491) DBG | I0906 18:29:33.354707   13943 retry.go:31] will retry after 944.824961ms: waiting for machine to come up
	I0906 18:29:34.301122   13921 main.go:141] libmachine: (addons-009491) DBG | domain addons-009491 has defined MAC address 52:54:00:47:6e:9e in network mk-addons-009491
	I0906 18:29:34.301648   13921 main.go:141] libmachine: (addons-009491) DBG | unable to find current IP address of domain addons-009491 in network mk-addons-009491
	I0906 18:29:34.301671   13921 main.go:141] libmachine: (addons-009491) DBG | I0906 18:29:34.301613   13943 retry.go:31] will retry after 826.046879ms: waiting for machine to come up
	I0906 18:29:35.129583   13921 main.go:141] libmachine: (addons-009491) DBG | domain addons-009491 has defined MAC address 52:54:00:47:6e:9e in network mk-addons-009491
	I0906 18:29:35.129967   13921 main.go:141] libmachine: (addons-009491) DBG | unable to find current IP address of domain addons-009491 in network mk-addons-009491
	I0906 18:29:35.129994   13921 main.go:141] libmachine: (addons-009491) DBG | I0906 18:29:35.129929   13943 retry.go:31] will retry after 1.449137482s: waiting for machine to come up
	I0906 18:29:36.581092   13921 main.go:141] libmachine: (addons-009491) DBG | domain addons-009491 has defined MAC address 52:54:00:47:6e:9e in network mk-addons-009491
	I0906 18:29:36.581467   13921 main.go:141] libmachine: (addons-009491) DBG | unable to find current IP address of domain addons-009491 in network mk-addons-009491
	I0906 18:29:36.581510   13921 main.go:141] libmachine: (addons-009491) DBG | I0906 18:29:36.581437   13943 retry.go:31] will retry after 1.723829806s: waiting for machine to come up
	I0906 18:29:38.307345   13921 main.go:141] libmachine: (addons-009491) DBG | domain addons-009491 has defined MAC address 52:54:00:47:6e:9e in network mk-addons-009491
	I0906 18:29:38.307773   13921 main.go:141] libmachine: (addons-009491) DBG | unable to find current IP address of domain addons-009491 in network mk-addons-009491
	I0906 18:29:38.307798   13921 main.go:141] libmachine: (addons-009491) DBG | I0906 18:29:38.307730   13943 retry.go:31] will retry after 1.625962135s: waiting for machine to come up
	I0906 18:29:39.935128   13921 main.go:141] libmachine: (addons-009491) DBG | domain addons-009491 has defined MAC address 52:54:00:47:6e:9e in network mk-addons-009491
	I0906 18:29:39.935594   13921 main.go:141] libmachine: (addons-009491) DBG | unable to find current IP address of domain addons-009491 in network mk-addons-009491
	I0906 18:29:39.935618   13921 main.go:141] libmachine: (addons-009491) DBG | I0906 18:29:39.935560   13943 retry.go:31] will retry after 2.632500297s: waiting for machine to come up
	I0906 18:29:42.571260   13921 main.go:141] libmachine: (addons-009491) DBG | domain addons-009491 has defined MAC address 52:54:00:47:6e:9e in network mk-addons-009491
	I0906 18:29:42.571608   13921 main.go:141] libmachine: (addons-009491) DBG | unable to find current IP address of domain addons-009491 in network mk-addons-009491
	I0906 18:29:42.571639   13921 main.go:141] libmachine: (addons-009491) DBG | I0906 18:29:42.571590   13943 retry.go:31] will retry after 3.177575995s: waiting for machine to come up
	I0906 18:29:45.752222   13921 main.go:141] libmachine: (addons-009491) DBG | domain addons-009491 has defined MAC address 52:54:00:47:6e:9e in network mk-addons-009491
	I0906 18:29:45.752720   13921 main.go:141] libmachine: (addons-009491) DBG | unable to find current IP address of domain addons-009491 in network mk-addons-009491
	I0906 18:29:45.752747   13921 main.go:141] libmachine: (addons-009491) DBG | I0906 18:29:45.752673   13943 retry.go:31] will retry after 4.498149275s: waiting for machine to come up
	I0906 18:29:50.251971   13921 main.go:141] libmachine: (addons-009491) DBG | domain addons-009491 has defined MAC address 52:54:00:47:6e:9e in network mk-addons-009491
	I0906 18:29:50.252374   13921 main.go:141] libmachine: (addons-009491) Found IP for machine: 192.168.39.227
	I0906 18:29:50.252413   13921 main.go:141] libmachine: (addons-009491) DBG | domain addons-009491 has current primary IP address 192.168.39.227 and MAC address 52:54:00:47:6e:9e in network mk-addons-009491
	I0906 18:29:50.252425   13921 main.go:141] libmachine: (addons-009491) Reserving static IP address...
	I0906 18:29:50.252711   13921 main.go:141] libmachine: (addons-009491) DBG | unable to find host DHCP lease matching {name: "addons-009491", mac: "52:54:00:47:6e:9e", ip: "192.168.39.227"} in network mk-addons-009491
	I0906 18:29:50.346952   13921 main.go:141] libmachine: (addons-009491) Reserved static IP address: 192.168.39.227
	I0906 18:29:50.346981   13921 main.go:141] libmachine: (addons-009491) Waiting for SSH to be available...
	I0906 18:29:50.347004   13921 main.go:141] libmachine: (addons-009491) DBG | Getting to WaitForSSH function...
	I0906 18:29:50.349361   13921 main.go:141] libmachine: (addons-009491) DBG | domain addons-009491 has defined MAC address 52:54:00:47:6e:9e in network mk-addons-009491
	I0906 18:29:50.349711   13921 main.go:141] libmachine: (addons-009491) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:47:6e:9e", ip: ""} in network mk-addons-009491: {Iface:virbr1 ExpiryTime:2024-09-06 19:29:43 +0000 UTC Type:0 Mac:52:54:00:47:6e:9e Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:minikube Clientid:01:52:54:00:47:6e:9e}
	I0906 18:29:50.349734   13921 main.go:141] libmachine: (addons-009491) DBG | domain addons-009491 has defined IP address 192.168.39.227 and MAC address 52:54:00:47:6e:9e in network mk-addons-009491
	I0906 18:29:50.349910   13921 main.go:141] libmachine: (addons-009491) DBG | Using SSH client type: external
	I0906 18:29:50.349937   13921 main.go:141] libmachine: (addons-009491) DBG | Using SSH private key: /home/jenkins/minikube-integration/19576-6054/.minikube/machines/addons-009491/id_rsa (-rw-------)
	I0906 18:29:50.349965   13921 main.go:141] libmachine: (addons-009491) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.227 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19576-6054/.minikube/machines/addons-009491/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0906 18:29:50.349980   13921 main.go:141] libmachine: (addons-009491) DBG | About to run SSH command:
	I0906 18:29:50.349994   13921 main.go:141] libmachine: (addons-009491) DBG | exit 0
	I0906 18:29:50.478862   13921 main.go:141] libmachine: (addons-009491) DBG | SSH cmd err, output: <nil>: 
	I0906 18:29:50.479145   13921 main.go:141] libmachine: (addons-009491) KVM machine creation complete!
	I0906 18:29:50.479439   13921 main.go:141] libmachine: (addons-009491) Calling .GetConfigRaw
	I0906 18:29:50.481066   13921 main.go:141] libmachine: (addons-009491) Calling .DriverName
	I0906 18:29:50.481267   13921 main.go:141] libmachine: (addons-009491) Calling .DriverName
	I0906 18:29:50.481426   13921 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0906 18:29:50.481438   13921 main.go:141] libmachine: (addons-009491) Calling .GetState
	I0906 18:29:50.482644   13921 main.go:141] libmachine: Detecting operating system of created instance...
	I0906 18:29:50.482657   13921 main.go:141] libmachine: Waiting for SSH to be available...
	I0906 18:29:50.482663   13921 main.go:141] libmachine: Getting to WaitForSSH function...
	I0906 18:29:50.482668   13921 main.go:141] libmachine: (addons-009491) Calling .GetSSHHostname
	I0906 18:29:50.484914   13921 main.go:141] libmachine: (addons-009491) DBG | domain addons-009491 has defined MAC address 52:54:00:47:6e:9e in network mk-addons-009491
	I0906 18:29:50.485289   13921 main.go:141] libmachine: (addons-009491) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:47:6e:9e", ip: ""} in network mk-addons-009491: {Iface:virbr1 ExpiryTime:2024-09-06 19:29:43 +0000 UTC Type:0 Mac:52:54:00:47:6e:9e Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:addons-009491 Clientid:01:52:54:00:47:6e:9e}
	I0906 18:29:50.485315   13921 main.go:141] libmachine: (addons-009491) DBG | domain addons-009491 has defined IP address 192.168.39.227 and MAC address 52:54:00:47:6e:9e in network mk-addons-009491
	I0906 18:29:50.485473   13921 main.go:141] libmachine: (addons-009491) Calling .GetSSHPort
	I0906 18:29:50.485647   13921 main.go:141] libmachine: (addons-009491) Calling .GetSSHKeyPath
	I0906 18:29:50.485789   13921 main.go:141] libmachine: (addons-009491) Calling .GetSSHKeyPath
	I0906 18:29:50.485913   13921 main.go:141] libmachine: (addons-009491) Calling .GetSSHUsername
	I0906 18:29:50.486076   13921 main.go:141] libmachine: Using SSH client type: native
	I0906 18:29:50.486282   13921 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.227 22 <nil> <nil>}
	I0906 18:29:50.486298   13921 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0906 18:29:50.586171   13921 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0906 18:29:50.586193   13921 main.go:141] libmachine: Detecting the provisioner...
	I0906 18:29:50.586200   13921 main.go:141] libmachine: (addons-009491) Calling .GetSSHHostname
	I0906 18:29:50.589125   13921 main.go:141] libmachine: (addons-009491) DBG | domain addons-009491 has defined MAC address 52:54:00:47:6e:9e in network mk-addons-009491
	I0906 18:29:50.589451   13921 main.go:141] libmachine: (addons-009491) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:47:6e:9e", ip: ""} in network mk-addons-009491: {Iface:virbr1 ExpiryTime:2024-09-06 19:29:43 +0000 UTC Type:0 Mac:52:54:00:47:6e:9e Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:addons-009491 Clientid:01:52:54:00:47:6e:9e}
	I0906 18:29:50.589480   13921 main.go:141] libmachine: (addons-009491) DBG | domain addons-009491 has defined IP address 192.168.39.227 and MAC address 52:54:00:47:6e:9e in network mk-addons-009491
	I0906 18:29:50.589633   13921 main.go:141] libmachine: (addons-009491) Calling .GetSSHPort
	I0906 18:29:50.589813   13921 main.go:141] libmachine: (addons-009491) Calling .GetSSHKeyPath
	I0906 18:29:50.589962   13921 main.go:141] libmachine: (addons-009491) Calling .GetSSHKeyPath
	I0906 18:29:50.590104   13921 main.go:141] libmachine: (addons-009491) Calling .GetSSHUsername
	I0906 18:29:50.590254   13921 main.go:141] libmachine: Using SSH client type: native
	I0906 18:29:50.590439   13921 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.227 22 <nil> <nil>}
	I0906 18:29:50.590454   13921 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0906 18:29:50.691450   13921 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0906 18:29:50.691544   13921 main.go:141] libmachine: found compatible host: buildroot
	I0906 18:29:50.691561   13921 main.go:141] libmachine: Provisioning with buildroot...
	I0906 18:29:50.691575   13921 main.go:141] libmachine: (addons-009491) Calling .GetMachineName
	I0906 18:29:50.691801   13921 buildroot.go:166] provisioning hostname "addons-009491"
	I0906 18:29:50.691821   13921 main.go:141] libmachine: (addons-009491) Calling .GetMachineName
	I0906 18:29:50.692035   13921 main.go:141] libmachine: (addons-009491) Calling .GetSSHHostname
	I0906 18:29:50.694446   13921 main.go:141] libmachine: (addons-009491) DBG | domain addons-009491 has defined MAC address 52:54:00:47:6e:9e in network mk-addons-009491
	I0906 18:29:50.694779   13921 main.go:141] libmachine: (addons-009491) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:47:6e:9e", ip: ""} in network mk-addons-009491: {Iface:virbr1 ExpiryTime:2024-09-06 19:29:43 +0000 UTC Type:0 Mac:52:54:00:47:6e:9e Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:addons-009491 Clientid:01:52:54:00:47:6e:9e}
	I0906 18:29:50.694803   13921 main.go:141] libmachine: (addons-009491) DBG | domain addons-009491 has defined IP address 192.168.39.227 and MAC address 52:54:00:47:6e:9e in network mk-addons-009491
	I0906 18:29:50.694935   13921 main.go:141] libmachine: (addons-009491) Calling .GetSSHPort
	I0906 18:29:50.695124   13921 main.go:141] libmachine: (addons-009491) Calling .GetSSHKeyPath
	I0906 18:29:50.695238   13921 main.go:141] libmachine: (addons-009491) Calling .GetSSHKeyPath
	I0906 18:29:50.695369   13921 main.go:141] libmachine: (addons-009491) Calling .GetSSHUsername
	I0906 18:29:50.695520   13921 main.go:141] libmachine: Using SSH client type: native
	I0906 18:29:50.695689   13921 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.227 22 <nil> <nil>}
	I0906 18:29:50.695705   13921 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-009491 && echo "addons-009491" | sudo tee /etc/hostname
	I0906 18:29:50.818694   13921 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-009491
	
	I0906 18:29:50.818723   13921 main.go:141] libmachine: (addons-009491) Calling .GetSSHHostname
	I0906 18:29:50.821240   13921 main.go:141] libmachine: (addons-009491) DBG | domain addons-009491 has defined MAC address 52:54:00:47:6e:9e in network mk-addons-009491
	I0906 18:29:50.821607   13921 main.go:141] libmachine: (addons-009491) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:47:6e:9e", ip: ""} in network mk-addons-009491: {Iface:virbr1 ExpiryTime:2024-09-06 19:29:43 +0000 UTC Type:0 Mac:52:54:00:47:6e:9e Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:addons-009491 Clientid:01:52:54:00:47:6e:9e}
	I0906 18:29:50.821640   13921 main.go:141] libmachine: (addons-009491) DBG | domain addons-009491 has defined IP address 192.168.39.227 and MAC address 52:54:00:47:6e:9e in network mk-addons-009491
	I0906 18:29:50.821788   13921 main.go:141] libmachine: (addons-009491) Calling .GetSSHPort
	I0906 18:29:50.821973   13921 main.go:141] libmachine: (addons-009491) Calling .GetSSHKeyPath
	I0906 18:29:50.822121   13921 main.go:141] libmachine: (addons-009491) Calling .GetSSHKeyPath
	I0906 18:29:50.822251   13921 main.go:141] libmachine: (addons-009491) Calling .GetSSHUsername
	I0906 18:29:50.822436   13921 main.go:141] libmachine: Using SSH client type: native
	I0906 18:29:50.822598   13921 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.227 22 <nil> <nil>}
	I0906 18:29:50.822614   13921 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-009491' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-009491/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-009491' | sudo tee -a /etc/hosts; 
				fi
			fi
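	The provisioning script above pins the hostname by rewriting (or appending) a `127.0.1.1` entry in `/etc/hosts`. The same logic can be exercised locally against a scratch file — a sketch using a temp file instead of `/etc/hosts`, so no sudo is needed:

```shell
HOSTS=$(mktemp)
printf '127.0.0.1 localhost\n127.0.1.1 old-name\n' > "$HOSTS"
NAME=addons-009491

# Same shape as the script in the log: if the hostname is not already
# present, rewrite an existing 127.0.1.1 entry, otherwise append one.
if ! grep -q "[[:space:]]$NAME\$" "$HOSTS"; then
  if grep -q '^127\.0\.1\.1[[:space:]]' "$HOSTS"; then
    sed -i "s/^127\.0\.1\.1[[:space:]].*/127.0.1.1 $NAME/" "$HOSTS"
  else
    echo "127.0.1.1 $NAME" >> "$HOSTS"
  fi
fi
grep '^127\.0\.1\.1' "$HOSTS"
```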
	I0906 18:29:50.937347   13921 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0906 18:29:50.937382   13921 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19576-6054/.minikube CaCertPath:/home/jenkins/minikube-integration/19576-6054/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19576-6054/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19576-6054/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19576-6054/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19576-6054/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19576-6054/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19576-6054/.minikube}
	I0906 18:29:50.937409   13921 buildroot.go:174] setting up certificates
	I0906 18:29:50.937423   13921 provision.go:84] configureAuth start
	I0906 18:29:50.937432   13921 main.go:141] libmachine: (addons-009491) Calling .GetMachineName
	I0906 18:29:50.937687   13921 main.go:141] libmachine: (addons-009491) Calling .GetIP
	I0906 18:29:50.939651   13921 main.go:141] libmachine: (addons-009491) DBG | domain addons-009491 has defined MAC address 52:54:00:47:6e:9e in network mk-addons-009491
	I0906 18:29:50.939992   13921 main.go:141] libmachine: (addons-009491) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:47:6e:9e", ip: ""} in network mk-addons-009491: {Iface:virbr1 ExpiryTime:2024-09-06 19:29:43 +0000 UTC Type:0 Mac:52:54:00:47:6e:9e Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:addons-009491 Clientid:01:52:54:00:47:6e:9e}
	I0906 18:29:50.940029   13921 main.go:141] libmachine: (addons-009491) DBG | domain addons-009491 has defined IP address 192.168.39.227 and MAC address 52:54:00:47:6e:9e in network mk-addons-009491
	I0906 18:29:50.940191   13921 main.go:141] libmachine: (addons-009491) Calling .GetSSHHostname
	I0906 18:29:50.942312   13921 main.go:141] libmachine: (addons-009491) DBG | domain addons-009491 has defined MAC address 52:54:00:47:6e:9e in network mk-addons-009491
	I0906 18:29:50.942644   13921 main.go:141] libmachine: (addons-009491) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:47:6e:9e", ip: ""} in network mk-addons-009491: {Iface:virbr1 ExpiryTime:2024-09-06 19:29:43 +0000 UTC Type:0 Mac:52:54:00:47:6e:9e Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:addons-009491 Clientid:01:52:54:00:47:6e:9e}
	I0906 18:29:50.942668   13921 main.go:141] libmachine: (addons-009491) DBG | domain addons-009491 has defined IP address 192.168.39.227 and MAC address 52:54:00:47:6e:9e in network mk-addons-009491
	I0906 18:29:50.942801   13921 provision.go:143] copyHostCerts
	I0906 18:29:50.942867   13921 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19576-6054/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19576-6054/.minikube/ca.pem (1078 bytes)
	I0906 18:29:50.943009   13921 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19576-6054/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19576-6054/.minikube/cert.pem (1123 bytes)
	I0906 18:29:50.943098   13921 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19576-6054/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19576-6054/.minikube/key.pem (1679 bytes)
	I0906 18:29:50.943162   13921 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19576-6054/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19576-6054/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19576-6054/.minikube/certs/ca-key.pem org=jenkins.addons-009491 san=[127.0.0.1 192.168.39.227 addons-009491 localhost minikube]
	I0906 18:29:51.103175   13921 provision.go:177] copyRemoteCerts
	I0906 18:29:51.103230   13921 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0906 18:29:51.103251   13921 main.go:141] libmachine: (addons-009491) Calling .GetSSHHostname
	I0906 18:29:51.105549   13921 main.go:141] libmachine: (addons-009491) DBG | domain addons-009491 has defined MAC address 52:54:00:47:6e:9e in network mk-addons-009491
	I0906 18:29:51.105952   13921 main.go:141] libmachine: (addons-009491) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:47:6e:9e", ip: ""} in network mk-addons-009491: {Iface:virbr1 ExpiryTime:2024-09-06 19:29:43 +0000 UTC Type:0 Mac:52:54:00:47:6e:9e Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:addons-009491 Clientid:01:52:54:00:47:6e:9e}
	I0906 18:29:51.105982   13921 main.go:141] libmachine: (addons-009491) DBG | domain addons-009491 has defined IP address 192.168.39.227 and MAC address 52:54:00:47:6e:9e in network mk-addons-009491
	I0906 18:29:51.106169   13921 main.go:141] libmachine: (addons-009491) Calling .GetSSHPort
	I0906 18:29:51.106343   13921 main.go:141] libmachine: (addons-009491) Calling .GetSSHKeyPath
	I0906 18:29:51.106472   13921 main.go:141] libmachine: (addons-009491) Calling .GetSSHUsername
	I0906 18:29:51.106605   13921 sshutil.go:53] new ssh client: &{IP:192.168.39.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19576-6054/.minikube/machines/addons-009491/id_rsa Username:docker}
	I0906 18:29:51.184203   13921 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6054/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0906 18:29:51.207537   13921 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6054/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0906 18:29:51.229930   13921 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6054/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0906 18:29:51.252167   13921 provision.go:87] duration metric: took 314.73306ms to configureAuth
	I0906 18:29:51.252202   13921 buildroot.go:189] setting minikube options for container-runtime
	I0906 18:29:51.252406   13921 config.go:182] Loaded profile config "addons-009491": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0906 18:29:51.252432   13921 main.go:141] libmachine: (addons-009491) Calling .DriverName
	I0906 18:29:51.252680   13921 main.go:141] libmachine: (addons-009491) Calling .GetSSHHostname
	I0906 18:29:51.254928   13921 main.go:141] libmachine: (addons-009491) DBG | domain addons-009491 has defined MAC address 52:54:00:47:6e:9e in network mk-addons-009491
	I0906 18:29:51.255289   13921 main.go:141] libmachine: (addons-009491) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:47:6e:9e", ip: ""} in network mk-addons-009491: {Iface:virbr1 ExpiryTime:2024-09-06 19:29:43 +0000 UTC Type:0 Mac:52:54:00:47:6e:9e Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:addons-009491 Clientid:01:52:54:00:47:6e:9e}
	I0906 18:29:51.255329   13921 main.go:141] libmachine: (addons-009491) DBG | domain addons-009491 has defined IP address 192.168.39.227 and MAC address 52:54:00:47:6e:9e in network mk-addons-009491
	I0906 18:29:51.255467   13921 main.go:141] libmachine: (addons-009491) Calling .GetSSHPort
	I0906 18:29:51.255620   13921 main.go:141] libmachine: (addons-009491) Calling .GetSSHKeyPath
	I0906 18:29:51.255761   13921 main.go:141] libmachine: (addons-009491) Calling .GetSSHKeyPath
	I0906 18:29:51.255855   13921 main.go:141] libmachine: (addons-009491) Calling .GetSSHUsername
	I0906 18:29:51.256035   13921 main.go:141] libmachine: Using SSH client type: native
	I0906 18:29:51.256219   13921 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.227 22 <nil> <nil>}
	I0906 18:29:51.256234   13921 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0906 18:29:51.360272   13921 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0906 18:29:51.360301   13921 buildroot.go:70] root file system type: tmpfs
	I0906 18:29:51.360458   13921 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0906 18:29:51.360489   13921 main.go:141] libmachine: (addons-009491) Calling .GetSSHHostname
	I0906 18:29:51.362820   13921 main.go:141] libmachine: (addons-009491) DBG | domain addons-009491 has defined MAC address 52:54:00:47:6e:9e in network mk-addons-009491
	I0906 18:29:51.363135   13921 main.go:141] libmachine: (addons-009491) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:47:6e:9e", ip: ""} in network mk-addons-009491: {Iface:virbr1 ExpiryTime:2024-09-06 19:29:43 +0000 UTC Type:0 Mac:52:54:00:47:6e:9e Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:addons-009491 Clientid:01:52:54:00:47:6e:9e}
	I0906 18:29:51.363159   13921 main.go:141] libmachine: (addons-009491) DBG | domain addons-009491 has defined IP address 192.168.39.227 and MAC address 52:54:00:47:6e:9e in network mk-addons-009491
	I0906 18:29:51.363341   13921 main.go:141] libmachine: (addons-009491) Calling .GetSSHPort
	I0906 18:29:51.363501   13921 main.go:141] libmachine: (addons-009491) Calling .GetSSHKeyPath
	I0906 18:29:51.363654   13921 main.go:141] libmachine: (addons-009491) Calling .GetSSHKeyPath
	I0906 18:29:51.363749   13921 main.go:141] libmachine: (addons-009491) Calling .GetSSHUsername
	I0906 18:29:51.363883   13921 main.go:141] libmachine: Using SSH client type: native
	I0906 18:29:51.364072   13921 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.227 22 <nil> <nil>}
	I0906 18:29:51.364168   13921 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0906 18:29:51.476851   13921 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0906 18:29:51.476877   13921 main.go:141] libmachine: (addons-009491) Calling .GetSSHHostname
	I0906 18:29:51.479403   13921 main.go:141] libmachine: (addons-009491) DBG | domain addons-009491 has defined MAC address 52:54:00:47:6e:9e in network mk-addons-009491
	I0906 18:29:51.479743   13921 main.go:141] libmachine: (addons-009491) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:47:6e:9e", ip: ""} in network mk-addons-009491: {Iface:virbr1 ExpiryTime:2024-09-06 19:29:43 +0000 UTC Type:0 Mac:52:54:00:47:6e:9e Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:addons-009491 Clientid:01:52:54:00:47:6e:9e}
	I0906 18:29:51.479788   13921 main.go:141] libmachine: (addons-009491) DBG | domain addons-009491 has defined IP address 192.168.39.227 and MAC address 52:54:00:47:6e:9e in network mk-addons-009491
	I0906 18:29:51.479956   13921 main.go:141] libmachine: (addons-009491) Calling .GetSSHPort
	I0906 18:29:51.480145   13921 main.go:141] libmachine: (addons-009491) Calling .GetSSHKeyPath
	I0906 18:29:51.480285   13921 main.go:141] libmachine: (addons-009491) Calling .GetSSHKeyPath
	I0906 18:29:51.480511   13921 main.go:141] libmachine: (addons-009491) Calling .GetSSHUsername
	I0906 18:29:51.480770   13921 main.go:141] libmachine: Using SSH client type: native
	I0906 18:29:51.480948   13921 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.227 22 <nil> <nil>}
	I0906 18:29:51.480972   13921 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0906 18:29:53.252040   13921 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0906 18:29:53.252085   13921 main.go:141] libmachine: Checking connection to Docker...
	I0906 18:29:53.252097   13921 main.go:141] libmachine: (addons-009491) Calling .GetURL
	I0906 18:29:53.253217   13921 main.go:141] libmachine: (addons-009491) DBG | Using libvirt version 6000000
	I0906 18:29:53.255265   13921 main.go:141] libmachine: (addons-009491) DBG | domain addons-009491 has defined MAC address 52:54:00:47:6e:9e in network mk-addons-009491
	I0906 18:29:53.255602   13921 main.go:141] libmachine: (addons-009491) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:47:6e:9e", ip: ""} in network mk-addons-009491: {Iface:virbr1 ExpiryTime:2024-09-06 19:29:43 +0000 UTC Type:0 Mac:52:54:00:47:6e:9e Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:addons-009491 Clientid:01:52:54:00:47:6e:9e}
	I0906 18:29:53.255628   13921 main.go:141] libmachine: (addons-009491) DBG | domain addons-009491 has defined IP address 192.168.39.227 and MAC address 52:54:00:47:6e:9e in network mk-addons-009491
	I0906 18:29:53.255750   13921 main.go:141] libmachine: Docker is up and running!
	I0906 18:29:53.255764   13921 main.go:141] libmachine: Reticulating splines...
	I0906 18:29:53.255772   13921 client.go:171] duration metric: took 24.687390918s to LocalClient.Create
	I0906 18:29:53.255798   13921 start.go:167] duration metric: took 24.687467357s to libmachine.API.Create "addons-009491"
	I0906 18:29:53.255808   13921 start.go:293] postStartSetup for "addons-009491" (driver="kvm2")
	I0906 18:29:53.255818   13921 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0906 18:29:53.255831   13921 main.go:141] libmachine: (addons-009491) Calling .DriverName
	I0906 18:29:53.256041   13921 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0906 18:29:53.256077   13921 main.go:141] libmachine: (addons-009491) Calling .GetSSHHostname
	I0906 18:29:53.257937   13921 main.go:141] libmachine: (addons-009491) DBG | domain addons-009491 has defined MAC address 52:54:00:47:6e:9e in network mk-addons-009491
	I0906 18:29:53.258239   13921 main.go:141] libmachine: (addons-009491) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:47:6e:9e", ip: ""} in network mk-addons-009491: {Iface:virbr1 ExpiryTime:2024-09-06 19:29:43 +0000 UTC Type:0 Mac:52:54:00:47:6e:9e Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:addons-009491 Clientid:01:52:54:00:47:6e:9e}
	I0906 18:29:53.258263   13921 main.go:141] libmachine: (addons-009491) DBG | domain addons-009491 has defined IP address 192.168.39.227 and MAC address 52:54:00:47:6e:9e in network mk-addons-009491
	I0906 18:29:53.258399   13921 main.go:141] libmachine: (addons-009491) Calling .GetSSHPort
	I0906 18:29:53.258570   13921 main.go:141] libmachine: (addons-009491) Calling .GetSSHKeyPath
	I0906 18:29:53.258716   13921 main.go:141] libmachine: (addons-009491) Calling .GetSSHUsername
	I0906 18:29:53.258859   13921 sshutil.go:53] new ssh client: &{IP:192.168.39.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19576-6054/.minikube/machines/addons-009491/id_rsa Username:docker}
	I0906 18:29:53.336926   13921 ssh_runner.go:195] Run: cat /etc/os-release
	I0906 18:29:53.341081   13921 info.go:137] Remote host: Buildroot 2023.02.9
	I0906 18:29:53.341102   13921 filesync.go:126] Scanning /home/jenkins/minikube-integration/19576-6054/.minikube/addons for local assets ...
	I0906 18:29:53.341181   13921 filesync.go:126] Scanning /home/jenkins/minikube-integration/19576-6054/.minikube/files for local assets ...
	I0906 18:29:53.341226   13921 start.go:296] duration metric: took 85.4117ms for postStartSetup
	I0906 18:29:53.341260   13921 main.go:141] libmachine: (addons-009491) Calling .GetConfigRaw
	I0906 18:29:53.341764   13921 main.go:141] libmachine: (addons-009491) Calling .GetIP
	I0906 18:29:53.344023   13921 main.go:141] libmachine: (addons-009491) DBG | domain addons-009491 has defined MAC address 52:54:00:47:6e:9e in network mk-addons-009491
	I0906 18:29:53.344357   13921 main.go:141] libmachine: (addons-009491) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:47:6e:9e", ip: ""} in network mk-addons-009491: {Iface:virbr1 ExpiryTime:2024-09-06 19:29:43 +0000 UTC Type:0 Mac:52:54:00:47:6e:9e Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:addons-009491 Clientid:01:52:54:00:47:6e:9e}
	I0906 18:29:53.344387   13921 main.go:141] libmachine: (addons-009491) DBG | domain addons-009491 has defined IP address 192.168.39.227 and MAC address 52:54:00:47:6e:9e in network mk-addons-009491
	I0906 18:29:53.344594   13921 profile.go:143] Saving config to /home/jenkins/minikube-integration/19576-6054/.minikube/profiles/addons-009491/config.json ...
	I0906 18:29:53.344845   13921 start.go:128] duration metric: took 24.793136224s to createHost
	I0906 18:29:53.344866   13921 main.go:141] libmachine: (addons-009491) Calling .GetSSHHostname
	I0906 18:29:53.347069   13921 main.go:141] libmachine: (addons-009491) DBG | domain addons-009491 has defined MAC address 52:54:00:47:6e:9e in network mk-addons-009491
	I0906 18:29:53.347373   13921 main.go:141] libmachine: (addons-009491) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:47:6e:9e", ip: ""} in network mk-addons-009491: {Iface:virbr1 ExpiryTime:2024-09-06 19:29:43 +0000 UTC Type:0 Mac:52:54:00:47:6e:9e Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:addons-009491 Clientid:01:52:54:00:47:6e:9e}
	I0906 18:29:53.347400   13921 main.go:141] libmachine: (addons-009491) DBG | domain addons-009491 has defined IP address 192.168.39.227 and MAC address 52:54:00:47:6e:9e in network mk-addons-009491
	I0906 18:29:53.347571   13921 main.go:141] libmachine: (addons-009491) Calling .GetSSHPort
	I0906 18:29:53.347727   13921 main.go:141] libmachine: (addons-009491) Calling .GetSSHKeyPath
	I0906 18:29:53.347874   13921 main.go:141] libmachine: (addons-009491) Calling .GetSSHKeyPath
	I0906 18:29:53.347970   13921 main.go:141] libmachine: (addons-009491) Calling .GetSSHUsername
	I0906 18:29:53.348111   13921 main.go:141] libmachine: Using SSH client type: native
	I0906 18:29:53.348266   13921 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.227 22 <nil> <nil>}
	I0906 18:29:53.348276   13921 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0906 18:29:53.447431   13921 main.go:141] libmachine: SSH cmd err, output: <nil>: 1725647393.406189438
	
	I0906 18:29:53.447457   13921 fix.go:216] guest clock: 1725647393.406189438
	I0906 18:29:53.447468   13921 fix.go:229] Guest: 2024-09-06 18:29:53.406189438 +0000 UTC Remote: 2024-09-06 18:29:53.344856633 +0000 UTC m=+24.887106523 (delta=61.332805ms)
	I0906 18:29:53.447510   13921 fix.go:200] guest clock delta is within tolerance: 61.332805ms
	I0906 18:29:53.447517   13921 start.go:83] releasing machines lock for "addons-009491", held for 24.895892063s
	I0906 18:29:53.447543   13921 main.go:141] libmachine: (addons-009491) Calling .DriverName
	I0906 18:29:53.447786   13921 main.go:141] libmachine: (addons-009491) Calling .GetIP
	I0906 18:29:53.450289   13921 main.go:141] libmachine: (addons-009491) DBG | domain addons-009491 has defined MAC address 52:54:00:47:6e:9e in network mk-addons-009491
	I0906 18:29:53.450629   13921 main.go:141] libmachine: (addons-009491) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:47:6e:9e", ip: ""} in network mk-addons-009491: {Iface:virbr1 ExpiryTime:2024-09-06 19:29:43 +0000 UTC Type:0 Mac:52:54:00:47:6e:9e Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:addons-009491 Clientid:01:52:54:00:47:6e:9e}
	I0906 18:29:53.450662   13921 main.go:141] libmachine: (addons-009491) DBG | domain addons-009491 has defined IP address 192.168.39.227 and MAC address 52:54:00:47:6e:9e in network mk-addons-009491
	I0906 18:29:53.450832   13921 main.go:141] libmachine: (addons-009491) Calling .DriverName
	I0906 18:29:53.451344   13921 main.go:141] libmachine: (addons-009491) Calling .DriverName
	I0906 18:29:53.451520   13921 main.go:141] libmachine: (addons-009491) Calling .DriverName
	I0906 18:29:53.451613   13921 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0906 18:29:53.451656   13921 main.go:141] libmachine: (addons-009491) Calling .GetSSHHostname
	I0906 18:29:53.451707   13921 ssh_runner.go:195] Run: cat /version.json
	I0906 18:29:53.451730   13921 main.go:141] libmachine: (addons-009491) Calling .GetSSHHostname
	I0906 18:29:53.454334   13921 main.go:141] libmachine: (addons-009491) DBG | domain addons-009491 has defined MAC address 52:54:00:47:6e:9e in network mk-addons-009491
	I0906 18:29:53.454354   13921 main.go:141] libmachine: (addons-009491) DBG | domain addons-009491 has defined MAC address 52:54:00:47:6e:9e in network mk-addons-009491
	I0906 18:29:53.454662   13921 main.go:141] libmachine: (addons-009491) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:47:6e:9e", ip: ""} in network mk-addons-009491: {Iface:virbr1 ExpiryTime:2024-09-06 19:29:43 +0000 UTC Type:0 Mac:52:54:00:47:6e:9e Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:addons-009491 Clientid:01:52:54:00:47:6e:9e}
	I0906 18:29:53.454690   13921 main.go:141] libmachine: (addons-009491) DBG | domain addons-009491 has defined IP address 192.168.39.227 and MAC address 52:54:00:47:6e:9e in network mk-addons-009491
	I0906 18:29:53.454712   13921 main.go:141] libmachine: (addons-009491) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:47:6e:9e", ip: ""} in network mk-addons-009491: {Iface:virbr1 ExpiryTime:2024-09-06 19:29:43 +0000 UTC Type:0 Mac:52:54:00:47:6e:9e Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:addons-009491 Clientid:01:52:54:00:47:6e:9e}
	I0906 18:29:53.454728   13921 main.go:141] libmachine: (addons-009491) DBG | domain addons-009491 has defined IP address 192.168.39.227 and MAC address 52:54:00:47:6e:9e in network mk-addons-009491
	I0906 18:29:53.454810   13921 main.go:141] libmachine: (addons-009491) Calling .GetSSHPort
	I0906 18:29:53.454909   13921 main.go:141] libmachine: (addons-009491) Calling .GetSSHPort
	I0906 18:29:53.454987   13921 main.go:141] libmachine: (addons-009491) Calling .GetSSHKeyPath
	I0906 18:29:53.455066   13921 main.go:141] libmachine: (addons-009491) Calling .GetSSHKeyPath
	I0906 18:29:53.455132   13921 main.go:141] libmachine: (addons-009491) Calling .GetSSHUsername
	I0906 18:29:53.455178   13921 main.go:141] libmachine: (addons-009491) Calling .GetSSHUsername
	I0906 18:29:53.455231   13921 sshutil.go:53] new ssh client: &{IP:192.168.39.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19576-6054/.minikube/machines/addons-009491/id_rsa Username:docker}
	I0906 18:29:53.455290   13921 sshutil.go:53] new ssh client: &{IP:192.168.39.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19576-6054/.minikube/machines/addons-009491/id_rsa Username:docker}
	I0906 18:29:53.554622   13921 ssh_runner.go:195] Run: systemctl --version
	I0906 18:29:53.560297   13921 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0906 18:29:53.565497   13921 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0906 18:29:53.565550   13921 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0906 18:29:53.582437   13921 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0906 18:29:53.582458   13921 start.go:495] detecting cgroup driver to use...
	I0906 18:29:53.582578   13921 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0906 18:29:53.600013   13921 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0906 18:29:53.610397   13921 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0906 18:29:53.620722   13921 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0906 18:29:53.620783   13921 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0906 18:29:53.631002   13921 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0906 18:29:53.641212   13921 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0906 18:29:53.651405   13921 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0906 18:29:53.661592   13921 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0906 18:29:53.671976   13921 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0906 18:29:53.682151   13921 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0906 18:29:53.692368   13921 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0906 18:29:53.703008   13921 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0906 18:29:53.712338   13921 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0906 18:29:53.721346   13921 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0906 18:29:53.824879   13921 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0906 18:29:53.848474   13921 start.go:495] detecting cgroup driver to use...
	I0906 18:29:53.848560   13921 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0906 18:29:53.863391   13921 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0906 18:29:53.878167   13921 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0906 18:29:53.899762   13921 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0906 18:29:53.911892   13921 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0906 18:29:53.926303   13921 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0906 18:29:53.962095   13921 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0906 18:29:53.974865   13921 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0906 18:29:53.992571   13921 ssh_runner.go:195] Run: which cri-dockerd
	I0906 18:29:53.996133   13921 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0906 18:29:54.005130   13921 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0906 18:29:54.023177   13921 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0906 18:29:54.142962   13921 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0906 18:29:54.264202   13921 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0906 18:29:54.264340   13921 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0906 18:29:54.281264   13921 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0906 18:29:54.395801   13921 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0906 18:29:57.191577   13921 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.795724122s)
	I0906 18:29:57.191671   13921 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0906 18:29:57.204693   13921 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0906 18:29:57.216838   13921 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0906 18:29:57.324394   13921 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0906 18:29:57.435703   13921 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0906 18:29:57.558766   13921 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0906 18:29:57.575769   13921 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0906 18:29:57.588332   13921 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0906 18:29:57.697412   13921 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0906 18:29:57.773476   13921 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0906 18:29:57.773566   13921 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0906 18:29:57.779520   13921 start.go:563] Will wait 60s for crictl version
	I0906 18:29:57.779565   13921 ssh_runner.go:195] Run: which crictl
	I0906 18:29:57.783497   13921 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0906 18:29:57.818882   13921 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.2.0
	RuntimeApiVersion:  v1
	I0906 18:29:57.818941   13921 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0906 18:29:57.847377   13921 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0906 18:29:57.873234   13921 out.go:235] * Preparing Kubernetes v1.31.0 on Docker 27.2.0 ...
	I0906 18:29:57.873269   13921 main.go:141] libmachine: (addons-009491) Calling .GetIP
	I0906 18:29:57.875825   13921 main.go:141] libmachine: (addons-009491) DBG | domain addons-009491 has defined MAC address 52:54:00:47:6e:9e in network mk-addons-009491
	I0906 18:29:57.876247   13921 main.go:141] libmachine: (addons-009491) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:47:6e:9e", ip: ""} in network mk-addons-009491: {Iface:virbr1 ExpiryTime:2024-09-06 19:29:43 +0000 UTC Type:0 Mac:52:54:00:47:6e:9e Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:addons-009491 Clientid:01:52:54:00:47:6e:9e}
	I0906 18:29:57.876271   13921 main.go:141] libmachine: (addons-009491) DBG | domain addons-009491 has defined IP address 192.168.39.227 and MAC address 52:54:00:47:6e:9e in network mk-addons-009491
	I0906 18:29:57.876469   13921 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0906 18:29:57.880216   13921 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0906 18:29:57.892349   13921 kubeadm.go:883] updating cluster {Name:addons-009491 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.34.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-009491 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.227 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0906 18:29:57.892434   13921 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0906 18:29:57.892484   13921 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0906 18:29:57.908810   13921 docker.go:685] Got preloaded images: 
	I0906 18:29:57.908830   13921 docker.go:691] registry.k8s.io/kube-apiserver:v1.31.0 wasn't preloaded
	I0906 18:29:57.908872   13921 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0906 18:29:57.918117   13921 ssh_runner.go:195] Run: which lz4
	I0906 18:29:57.922299   13921 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0906 18:29:57.927039   13921 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0906 18:29:57.927063   13921 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6054/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (342554258 bytes)
	I0906 18:29:59.068175   13921 docker.go:649] duration metric: took 1.146322079s to copy over tarball
	I0906 18:29:59.068241   13921 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0906 18:30:00.909522   13921 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.841250096s)
	I0906 18:30:00.909557   13921 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0906 18:30:00.942691   13921 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0906 18:30:00.953116   13921 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2631 bytes)
	I0906 18:30:00.971631   13921 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0906 18:30:01.085701   13921 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0906 18:30:05.282568   13921 ssh_runner.go:235] Completed: sudo systemctl restart docker: (4.196832387s)
	I0906 18:30:05.282670   13921 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0906 18:30:05.301225   13921 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-controller-manager:v1.31.0
	registry.k8s.io/kube-scheduler:v1.31.0
	registry.k8s.io/kube-apiserver:v1.31.0
	registry.k8s.io/kube-proxy:v1.31.0
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	registry.k8s.io/coredns/coredns:v1.11.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0906 18:30:05.301248   13921 cache_images.go:84] Images are preloaded, skipping loading
	I0906 18:30:05.301271   13921 kubeadm.go:934] updating node { 192.168.39.227 8443 v1.31.0 docker true true} ...
	I0906 18:30:05.301381   13921 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-009491 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.227
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:addons-009491 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0906 18:30:05.301434   13921 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0906 18:30:05.357037   13921 cni.go:84] Creating CNI manager for ""
	I0906 18:30:05.357066   13921 cni.go:158] "kvm2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0906 18:30:05.357090   13921 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0906 18:30:05.357117   13921 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.227 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-009491 NodeName:addons-009491 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.227"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.227 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0906 18:30:05.357261   13921 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.227
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "addons-009491"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.227
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.227"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0906 18:30:05.357328   13921 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0906 18:30:05.367099   13921 binaries.go:44] Found k8s binaries, skipping transfer
	I0906 18:30:05.367165   13921 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0906 18:30:05.376212   13921 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (314 bytes)
	I0906 18:30:05.392082   13921 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0906 18:30:05.407652   13921 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2161 bytes)
	I0906 18:30:05.423096   13921 ssh_runner.go:195] Run: grep 192.168.39.227	control-plane.minikube.internal$ /etc/hosts
	I0906 18:30:05.426871   13921 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.227	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0906 18:30:05.438151   13921 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0906 18:30:05.546159   13921 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0906 18:30:05.565972   13921 certs.go:68] Setting up /home/jenkins/minikube-integration/19576-6054/.minikube/profiles/addons-009491 for IP: 192.168.39.227
	I0906 18:30:05.565992   13921 certs.go:194] generating shared ca certs ...
	I0906 18:30:05.566006   13921 certs.go:226] acquiring lock for ca certs: {Name:mkd2f37b6c6e709fa7b542b3dd1e328e393e43d4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 18:30:05.566163   13921 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19576-6054/.minikube/ca.key
	I0906 18:30:05.760376   13921 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19576-6054/.minikube/ca.crt ...
	I0906 18:30:05.760402   13921 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19576-6054/.minikube/ca.crt: {Name:mk4f74e65569d57e3eef6bc702d540e69007c556 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 18:30:05.760592   13921 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19576-6054/.minikube/ca.key ...
	I0906 18:30:05.760610   13921 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19576-6054/.minikube/ca.key: {Name:mkef22688a309aed939ca86dcd01b4fea5c83118 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 18:30:05.760717   13921 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19576-6054/.minikube/proxy-client-ca.key
	I0906 18:30:05.838632   13921 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19576-6054/.minikube/proxy-client-ca.crt ...
	I0906 18:30:05.838657   13921 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19576-6054/.minikube/proxy-client-ca.crt: {Name:mk44fe1239f8f600522cb1e82a3fdabe68501138 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 18:30:05.838925   13921 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19576-6054/.minikube/proxy-client-ca.key ...
	I0906 18:30:05.838949   13921 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19576-6054/.minikube/proxy-client-ca.key: {Name:mk207fd3b3a0d90768e06c1988efd40b4b6727dd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 18:30:05.839097   13921 certs.go:256] generating profile certs ...
	I0906 18:30:05.839157   13921 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19576-6054/.minikube/profiles/addons-009491/client.key
	I0906 18:30:05.839179   13921 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19576-6054/.minikube/profiles/addons-009491/client.crt with IP's: []
	I0906 18:30:05.943564   13921 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19576-6054/.minikube/profiles/addons-009491/client.crt ...
	I0906 18:30:05.943595   13921 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19576-6054/.minikube/profiles/addons-009491/client.crt: {Name:mk8847e6be213baf2301c14ea7d49bcc9a3619b4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 18:30:05.943795   13921 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19576-6054/.minikube/profiles/addons-009491/client.key ...
	I0906 18:30:05.943811   13921 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19576-6054/.minikube/profiles/addons-009491/client.key: {Name:mka2c5e554c7744cac137a586453af6b965c0d0a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 18:30:05.943937   13921 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19576-6054/.minikube/profiles/addons-009491/apiserver.key.bc281775
	I0906 18:30:05.943969   13921 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19576-6054/.minikube/profiles/addons-009491/apiserver.crt.bc281775 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.227]
	I0906 18:30:06.127559   13921 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19576-6054/.minikube/profiles/addons-009491/apiserver.crt.bc281775 ...
	I0906 18:30:06.127586   13921 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19576-6054/.minikube/profiles/addons-009491/apiserver.crt.bc281775: {Name:mk1cb1eefdd89dc4394297bd70813f854a4dea4d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 18:30:06.127763   13921 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19576-6054/.minikube/profiles/addons-009491/apiserver.key.bc281775 ...
	I0906 18:30:06.127780   13921 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19576-6054/.minikube/profiles/addons-009491/apiserver.key.bc281775: {Name:mk1d1963c12c50e7d57ff24ad33c76fb91bcb414 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 18:30:06.127878   13921 certs.go:381] copying /home/jenkins/minikube-integration/19576-6054/.minikube/profiles/addons-009491/apiserver.crt.bc281775 -> /home/jenkins/minikube-integration/19576-6054/.minikube/profiles/addons-009491/apiserver.crt
	I0906 18:30:06.127976   13921 certs.go:385] copying /home/jenkins/minikube-integration/19576-6054/.minikube/profiles/addons-009491/apiserver.key.bc281775 -> /home/jenkins/minikube-integration/19576-6054/.minikube/profiles/addons-009491/apiserver.key
	I0906 18:30:06.128046   13921 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19576-6054/.minikube/profiles/addons-009491/proxy-client.key
	I0906 18:30:06.128069   13921 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19576-6054/.minikube/profiles/addons-009491/proxy-client.crt with IP's: []
	I0906 18:30:06.339468   13921 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19576-6054/.minikube/profiles/addons-009491/proxy-client.crt ...
	I0906 18:30:06.339497   13921 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19576-6054/.minikube/profiles/addons-009491/proxy-client.crt: {Name:mk5364f0f5b59f3b0bb61d18a5dd685e6c75ed0d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 18:30:06.339666   13921 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19576-6054/.minikube/profiles/addons-009491/proxy-client.key ...
	I0906 18:30:06.339684   13921 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19576-6054/.minikube/profiles/addons-009491/proxy-client.key: {Name:mk2df8f9acb0f63770ecdf94df03a28aa01999a7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 18:30:06.339908   13921 certs.go:484] found cert: /home/jenkins/minikube-integration/19576-6054/.minikube/certs/ca-key.pem (1675 bytes)
	I0906 18:30:06.339949   13921 certs.go:484] found cert: /home/jenkins/minikube-integration/19576-6054/.minikube/certs/ca.pem (1078 bytes)
	I0906 18:30:06.339983   13921 certs.go:484] found cert: /home/jenkins/minikube-integration/19576-6054/.minikube/certs/cert.pem (1123 bytes)
	I0906 18:30:06.340013   13921 certs.go:484] found cert: /home/jenkins/minikube-integration/19576-6054/.minikube/certs/key.pem (1679 bytes)
	I0906 18:30:06.340994   13921 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6054/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0906 18:30:06.375089   13921 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6054/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0906 18:30:06.408261   13921 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6054/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0906 18:30:06.432799   13921 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6054/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0906 18:30:06.456334   13921 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6054/.minikube/profiles/addons-009491/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0906 18:30:06.479930   13921 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6054/.minikube/profiles/addons-009491/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0906 18:30:06.503149   13921 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6054/.minikube/profiles/addons-009491/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0906 18:30:06.526795   13921 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6054/.minikube/profiles/addons-009491/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0906 18:30:06.549452   13921 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6054/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0906 18:30:06.571851   13921 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0906 18:30:06.587406   13921 ssh_runner.go:195] Run: openssl version
	I0906 18:30:06.592908   13921 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0906 18:30:06.602826   13921 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0906 18:30:06.606934   13921 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep  6 18:30 /usr/share/ca-certificates/minikubeCA.pem
	I0906 18:30:06.606969   13921 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0906 18:30:06.612383   13921 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0906 18:30:06.622894   13921 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0906 18:30:06.627032   13921 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0906 18:30:06.627079   13921 kubeadm.go:392] StartCluster: {Name:addons-009491 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.34.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-009491 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.227 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0906 18:30:06.627183   13921 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0906 18:30:06.642804   13921 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0906 18:30:06.652071   13921 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0906 18:30:06.660857   13921 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0906 18:30:06.669697   13921 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0906 18:30:06.669710   13921 kubeadm.go:157] found existing configuration files:
	
	I0906 18:30:06.669740   13921 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0906 18:30:06.678127   13921 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0906 18:30:06.678172   13921 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0906 18:30:06.686759   13921 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0906 18:30:06.695001   13921 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0906 18:30:06.695042   13921 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0906 18:30:06.703488   13921 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0906 18:30:06.711850   13921 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0906 18:30:06.711885   13921 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0906 18:30:06.720364   13921 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0906 18:30:06.728421   13921 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0906 18:30:06.728457   13921 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0906 18:30:06.736867   13921 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0906 18:30:06.781715   13921 kubeadm.go:310] W0906 18:30:06.739130    1503 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0906 18:30:06.782296   13921 kubeadm.go:310] W0906 18:30:06.739901    1503 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0906 18:30:06.899354   13921 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0906 18:30:16.952056   13921 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0
	I0906 18:30:16.952128   13921 kubeadm.go:310] [preflight] Running pre-flight checks
	I0906 18:30:16.952236   13921 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0906 18:30:16.952389   13921 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0906 18:30:16.952523   13921 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0906 18:30:16.952594   13921 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0906 18:30:16.953835   13921 out.go:235]   - Generating certificates and keys ...
	I0906 18:30:16.953937   13921 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0906 18:30:16.954016   13921 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0906 18:30:16.954079   13921 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0906 18:30:16.954126   13921 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0906 18:30:16.954196   13921 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0906 18:30:16.954265   13921 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0906 18:30:16.954315   13921 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0906 18:30:16.954452   13921 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-009491 localhost] and IPs [192.168.39.227 127.0.0.1 ::1]
	I0906 18:30:16.954511   13921 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0906 18:30:16.954640   13921 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-009491 localhost] and IPs [192.168.39.227 127.0.0.1 ::1]
	I0906 18:30:16.954696   13921 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0906 18:30:16.954752   13921 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0906 18:30:16.954798   13921 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0906 18:30:16.954848   13921 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0906 18:30:16.954892   13921 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0906 18:30:16.954938   13921 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0906 18:30:16.954984   13921 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0906 18:30:16.955058   13921 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0906 18:30:16.955106   13921 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0906 18:30:16.955192   13921 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0906 18:30:16.955275   13921 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0906 18:30:16.957099   13921 out.go:235]   - Booting up control plane ...
	I0906 18:30:16.957187   13921 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0906 18:30:16.957288   13921 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0906 18:30:16.957357   13921 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0906 18:30:16.957491   13921 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0906 18:30:16.957575   13921 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0906 18:30:16.957614   13921 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0906 18:30:16.957720   13921 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0906 18:30:16.957826   13921 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0906 18:30:16.957883   13921 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.484542ms
	I0906 18:30:16.957960   13921 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0906 18:30:16.958011   13921 kubeadm.go:310] [api-check] The API server is healthy after 5.002383042s
	I0906 18:30:16.958106   13921 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0906 18:30:16.958250   13921 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0906 18:30:16.958330   13921 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0906 18:30:16.958560   13921 kubeadm.go:310] [mark-control-plane] Marking the node addons-009491 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0906 18:30:16.958641   13921 kubeadm.go:310] [bootstrap-token] Using token: zp8qdg.rldwt3pi5q2vykqm
	I0906 18:30:16.959875   13921 out.go:235]   - Configuring RBAC rules ...
	I0906 18:30:16.960003   13921 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0906 18:30:16.960112   13921 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0906 18:30:16.960329   13921 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0906 18:30:16.960503   13921 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0906 18:30:16.960601   13921 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0906 18:30:16.960681   13921 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0906 18:30:16.960783   13921 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0906 18:30:16.960821   13921 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0906 18:30:16.960866   13921 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0906 18:30:16.960880   13921 kubeadm.go:310] 
	I0906 18:30:16.960939   13921 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0906 18:30:16.960946   13921 kubeadm.go:310] 
	I0906 18:30:16.961047   13921 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0906 18:30:16.961057   13921 kubeadm.go:310] 
	I0906 18:30:16.961077   13921 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0906 18:30:16.961133   13921 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0906 18:30:16.961204   13921 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0906 18:30:16.961213   13921 kubeadm.go:310] 
	I0906 18:30:16.961298   13921 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0906 18:30:16.961309   13921 kubeadm.go:310] 
	I0906 18:30:16.961374   13921 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0906 18:30:16.961383   13921 kubeadm.go:310] 
	I0906 18:30:16.961461   13921 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0906 18:30:16.961557   13921 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0906 18:30:16.961619   13921 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0906 18:30:16.961626   13921 kubeadm.go:310] 
	I0906 18:30:16.961697   13921 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0906 18:30:16.961765   13921 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0906 18:30:16.961770   13921 kubeadm.go:310] 
	I0906 18:30:16.961851   13921 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token zp8qdg.rldwt3pi5q2vykqm \
	I0906 18:30:16.961993   13921 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:f214ac3894973eca49f6386ca1501a5c5d62b290c3ffbebfe2d4883672b1931e \
	I0906 18:30:16.962019   13921 kubeadm.go:310] 	--control-plane 
	I0906 18:30:16.962024   13921 kubeadm.go:310] 
	I0906 18:30:16.962099   13921 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0906 18:30:16.962106   13921 kubeadm.go:310] 
	I0906 18:30:16.962185   13921 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token zp8qdg.rldwt3pi5q2vykqm \
	I0906 18:30:16.962310   13921 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:f214ac3894973eca49f6386ca1501a5c5d62b290c3ffbebfe2d4883672b1931e 
	I0906 18:30:16.962324   13921 cni.go:84] Creating CNI manager for ""
	I0906 18:30:16.962341   13921 cni.go:158] "kvm2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0906 18:30:16.963598   13921 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0906 18:30:16.964629   13921 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0906 18:30:16.976156   13921 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0906 18:30:16.994714   13921 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0906 18:30:16.994810   13921 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 18:30:16.994960   13921 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-009491 minikube.k8s.io/updated_at=2024_09_06T18_30_16_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=e6b6435971a63e36b5096cd544634422129cef13 minikube.k8s.io/name=addons-009491 minikube.k8s.io/primary=true
	I0906 18:30:17.097708   13921 ops.go:34] apiserver oom_adj: -16
	I0906 18:30:17.097870   13921 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 18:30:17.598740   13921 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 18:30:18.098546   13921 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 18:30:18.597938   13921 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 18:30:19.098476   13921 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 18:30:19.598486   13921 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 18:30:20.098507   13921 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 18:30:20.597922   13921 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 18:30:20.692553   13921 kubeadm.go:1113] duration metric: took 3.697806918s to wait for elevateKubeSystemPrivileges
	I0906 18:30:20.692595   13921 kubeadm.go:394] duration metric: took 14.06551814s to StartCluster
	I0906 18:30:20.692618   13921 settings.go:142] acquiring lock: {Name:mkad984b3b48e1128587c543c6007edeec04a7f6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 18:30:20.692757   13921 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19576-6054/kubeconfig
	I0906 18:30:20.693286   13921 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19576-6054/kubeconfig: {Name:mk43292b50da11df939d5ed6d6816fae8df77886 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 18:30:20.693492   13921 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0906 18:30:20.693522   13921 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.227 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0906 18:30:20.693576   13921 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0906 18:30:20.693676   13921 addons.go:69] Setting cloud-spanner=true in profile "addons-009491"
	I0906 18:30:20.693693   13921 addons.go:69] Setting inspektor-gadget=true in profile "addons-009491"
	I0906 18:30:20.693705   13921 config.go:182] Loaded profile config "addons-009491": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0906 18:30:20.693713   13921 addons.go:69] Setting storage-provisioner=true in profile "addons-009491"
	I0906 18:30:20.693735   13921 addons.go:69] Setting volcano=true in profile "addons-009491"
	I0906 18:30:20.693731   13921 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-009491"
	I0906 18:30:20.693749   13921 addons.go:69] Setting default-storageclass=true in profile "addons-009491"
	I0906 18:30:20.693759   13921 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-009491"
	I0906 18:30:20.693762   13921 addons.go:69] Setting volumesnapshots=true in profile "addons-009491"
	I0906 18:30:20.693762   13921 addons.go:69] Setting registry=true in profile "addons-009491"
	I0906 18:30:20.693764   13921 addons.go:69] Setting gcp-auth=true in profile "addons-009491"
	I0906 18:30:20.693778   13921 addons.go:234] Setting addon volumesnapshots=true in "addons-009491"
	I0906 18:30:20.693782   13921 addons.go:234] Setting addon registry=true in "addons-009491"
	I0906 18:30:20.693784   13921 mustload.go:65] Loading cluster: addons-009491
	I0906 18:30:20.693793   13921 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-009491"
	I0906 18:30:20.693809   13921 host.go:66] Checking if "addons-009491" exists ...
	I0906 18:30:20.693809   13921 addons.go:69] Setting metrics-server=true in profile "addons-009491"
	I0906 18:30:20.693812   13921 addons.go:69] Setting ingress=true in profile "addons-009491"
	I0906 18:30:20.693830   13921 addons.go:234] Setting addon metrics-server=true in "addons-009491"
	I0906 18:30:20.693830   13921 addons.go:234] Setting addon ingress=true in "addons-009491"
	I0906 18:30:20.693849   13921 host.go:66] Checking if "addons-009491" exists ...
	I0906 18:30:20.693860   13921 host.go:66] Checking if "addons-009491" exists ...
	I0906 18:30:20.693933   13921 config.go:182] Loaded profile config "addons-009491": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0906 18:30:20.693754   13921 addons.go:234] Setting addon volcano=true in "addons-009491"
	I0906 18:30:20.694261   13921 host.go:66] Checking if "addons-009491" exists ...
	I0906 18:30:20.694275   13921 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0906 18:30:20.694285   13921 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0906 18:30:20.694300   13921 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0906 18:30:20.694315   13921 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 18:30:20.694317   13921 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0906 18:30:20.694323   13921 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 18:30:20.694332   13921 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 18:30:20.693799   13921 host.go:66] Checking if "addons-009491" exists ...
	I0906 18:30:20.694430   13921 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 18:30:20.694260   13921 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0906 18:30:20.693727   13921 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-009491"
	I0906 18:30:20.694520   13921 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 18:30:20.693719   13921 addons.go:234] Setting addon inspektor-gadget=true in "addons-009491"
	I0906 18:30:20.693738   13921 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-009491"
	I0906 18:30:20.694607   13921 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-009491"
	I0906 18:30:20.693754   13921 addons.go:234] Setting addon storage-provisioner=true in "addons-009491"
	I0906 18:30:20.693720   13921 addons.go:234] Setting addon cloud-spanner=true in "addons-009491"
	I0906 18:30:20.693753   13921 addons.go:69] Setting helm-tiller=true in profile "addons-009491"
	I0906 18:30:20.694638   13921 addons.go:234] Setting addon helm-tiller=true in "addons-009491"
	I0906 18:30:20.694665   13921 host.go:66] Checking if "addons-009491" exists ...
	I0906 18:30:20.693682   13921 addons.go:69] Setting yakd=true in profile "addons-009491"
	I0906 18:30:20.694762   13921 addons.go:234] Setting addon yakd=true in "addons-009491"
	I0906 18:30:20.694779   13921 host.go:66] Checking if "addons-009491" exists ...
	I0906 18:30:20.694788   13921 host.go:66] Checking if "addons-009491" exists ...
	I0906 18:30:20.694787   13921 host.go:66] Checking if "addons-009491" exists ...
	I0906 18:30:20.693801   13921 host.go:66] Checking if "addons-009491" exists ...
	I0906 18:30:20.694872   13921 host.go:66] Checking if "addons-009491" exists ...
	I0906 18:30:20.695039   13921 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0906 18:30:20.695076   13921 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 18:30:20.695124   13921 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0906 18:30:20.695138   13921 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0906 18:30:20.695157   13921 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 18:30:20.695167   13921 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 18:30:20.694763   13921 host.go:66] Checking if "addons-009491" exists ...
	I0906 18:30:20.694535   13921 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-009491"
	I0906 18:30:20.695253   13921 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0906 18:30:20.695302   13921 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 18:30:20.695351   13921 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0906 18:30:20.695126   13921 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0906 18:30:20.695428   13921 out.go:177] * Verifying Kubernetes components...
	I0906 18:30:20.693719   13921 addons.go:69] Setting ingress-dns=true in profile "addons-009491"
	I0906 18:30:20.695484   13921 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0906 18:30:20.695507   13921 addons.go:234] Setting addon ingress-dns=true in "addons-009491"
	I0906 18:30:20.695512   13921 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 18:30:20.695542   13921 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0906 18:30:20.695550   13921 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0906 18:30:20.695433   13921 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 18:30:20.695577   13921 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 18:30:20.695545   13921 host.go:66] Checking if "addons-009491" exists ...
	I0906 18:30:20.695379   13921 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 18:30:20.694735   13921 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0906 18:30:20.696156   13921 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 18:30:20.696218   13921 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0906 18:30:20.696244   13921 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 18:30:20.695569   13921 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 18:30:20.700625   13921 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0906 18:30:20.716178   13921 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43669
	I0906 18:30:20.716194   13921 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46269
	I0906 18:30:20.716177   13921 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35967
	I0906 18:30:20.716195   13921 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37561
	I0906 18:30:20.716651   13921 main.go:141] libmachine: () Calling .GetVersion
	I0906 18:30:20.716761   13921 main.go:141] libmachine: () Calling .GetVersion
	I0906 18:30:20.716775   13921 main.go:141] libmachine: () Calling .GetVersion
	I0906 18:30:20.717000   13921 main.go:141] libmachine: () Calling .GetVersion
	I0906 18:30:20.717185   13921 main.go:141] libmachine: Using API Version  1
	I0906 18:30:20.717197   13921 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 18:30:20.717342   13921 main.go:141] libmachine: Using API Version  1
	I0906 18:30:20.717352   13921 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 18:30:20.717399   13921 main.go:141] libmachine: Using API Version  1
	I0906 18:30:20.717416   13921 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 18:30:20.717781   13921 main.go:141] libmachine: () Calling .GetMachineName
	I0906 18:30:20.717840   13921 main.go:141] libmachine: () Calling .GetMachineName
	I0906 18:30:20.717841   13921 main.go:141] libmachine: Using API Version  1
	I0906 18:30:20.717866   13921 main.go:141] libmachine: () Calling .GetMachineName
	I0906 18:30:20.717876   13921 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 18:30:20.718254   13921 main.go:141] libmachine: () Calling .GetMachineName
	I0906 18:30:20.718329   13921 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0906 18:30:20.718359   13921 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 18:30:20.719221   13921 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0906 18:30:20.719264   13921 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 18:30:20.723214   13921 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0906 18:30:20.723239   13921 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 18:30:20.723560   13921 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0906 18:30:20.723577   13921 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 18:30:20.739428   13921 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35545
	I0906 18:30:20.739624   13921 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36643
	I0906 18:30:20.740348   13921 main.go:141] libmachine: () Calling .GetVersion
	I0906 18:30:20.740465   13921 main.go:141] libmachine: () Calling .GetVersion
	I0906 18:30:20.740966   13921 main.go:141] libmachine: Using API Version  1
	I0906 18:30:20.740985   13921 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 18:30:20.741123   13921 main.go:141] libmachine: Using API Version  1
	I0906 18:30:20.741133   13921 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 18:30:20.741347   13921 main.go:141] libmachine: () Calling .GetMachineName
	I0906 18:30:20.742003   13921 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0906 18:30:20.742038   13921 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 18:30:20.742172   13921 main.go:141] libmachine: () Calling .GetMachineName
	I0906 18:30:20.742362   13921 main.go:141] libmachine: (addons-009491) Calling .GetState
	I0906 18:30:20.744702   13921 host.go:66] Checking if "addons-009491" exists ...
	I0906 18:30:20.745008   13921 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33939
	I0906 18:30:20.745032   13921 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45081
	I0906 18:30:20.745116   13921 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0906 18:30:20.745140   13921 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 18:30:20.745419   13921 main.go:141] libmachine: () Calling .GetVersion
	I0906 18:30:20.745556   13921 main.go:141] libmachine: () Calling .GetVersion
	I0906 18:30:20.745930   13921 main.go:141] libmachine: Using API Version  1
	I0906 18:30:20.745946   13921 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 18:30:20.746061   13921 main.go:141] libmachine: Using API Version  1
	I0906 18:30:20.746082   13921 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 18:30:20.746309   13921 main.go:141] libmachine: () Calling .GetMachineName
	I0906 18:30:20.746457   13921 main.go:141] libmachine: () Calling .GetMachineName
	I0906 18:30:20.746523   13921 main.go:141] libmachine: (addons-009491) Calling .GetState
	I0906 18:30:20.746973   13921 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0906 18:30:20.747027   13921 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 18:30:20.747430   13921 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34499
	I0906 18:30:20.747683   13921 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36387
	I0906 18:30:20.748001   13921 main.go:141] libmachine: () Calling .GetVersion
	I0906 18:30:20.748228   13921 main.go:141] libmachine: () Calling .GetVersion
	I0906 18:30:20.748919   13921 main.go:141] libmachine: (addons-009491) Calling .DriverName
	I0906 18:30:20.749025   13921 main.go:141] libmachine: Using API Version  1
	I0906 18:30:20.749045   13921 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 18:30:20.749458   13921 main.go:141] libmachine: () Calling .GetMachineName
	I0906 18:30:20.749649   13921 main.go:141] libmachine: (addons-009491) Calling .GetState
	I0906 18:30:20.750167   13921 main.go:141] libmachine: Using API Version  1
	I0906 18:30:20.750184   13921 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 18:30:20.750617   13921 main.go:141] libmachine: () Calling .GetMachineName
	I0906 18:30:20.750822   13921 main.go:141] libmachine: (addons-009491) Calling .GetState
	I0906 18:30:20.750897   13921 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0
	I0906 18:30:20.751330   13921 main.go:141] libmachine: (addons-009491) Calling .DriverName
	I0906 18:30:20.752267   13921 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0906 18:30:20.752286   13921 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0906 18:30:20.752307   13921 main.go:141] libmachine: (addons-009491) Calling .GetSSHHostname
	I0906 18:30:20.753275   13921 out.go:177]   - Using image docker.io/volcanosh/vc-webhook-manager:v1.9.0
	I0906 18:30:20.754602   13921 out.go:177]   - Using image docker.io/volcanosh/vc-controller-manager:v1.9.0
	I0906 18:30:20.756988   13921 addons.go:234] Setting addon default-storageclass=true in "addons-009491"
	I0906 18:30:20.757030   13921 host.go:66] Checking if "addons-009491" exists ...
	I0906 18:30:20.757394   13921 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0906 18:30:20.757437   13921 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 18:30:20.757661   13921 main.go:141] libmachine: (addons-009491) Calling .GetSSHPort
	I0906 18:30:20.757662   13921 main.go:141] libmachine: (addons-009491) DBG | domain addons-009491 has defined MAC address 52:54:00:47:6e:9e in network mk-addons-009491
	I0906 18:30:20.757692   13921 main.go:141] libmachine: (addons-009491) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:47:6e:9e", ip: ""} in network mk-addons-009491: {Iface:virbr1 ExpiryTime:2024-09-06 19:29:43 +0000 UTC Type:0 Mac:52:54:00:47:6e:9e Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:addons-009491 Clientid:01:52:54:00:47:6e:9e}
	I0906 18:30:20.757712   13921 main.go:141] libmachine: (addons-009491) DBG | domain addons-009491 has defined IP address 192.168.39.227 and MAC address 52:54:00:47:6e:9e in network mk-addons-009491
	I0906 18:30:20.757801   13921 out.go:177]   - Using image docker.io/volcanosh/vc-scheduler:v1.9.0
	I0906 18:30:20.757910   13921 main.go:141] libmachine: (addons-009491) Calling .GetSSHKeyPath
	I0906 18:30:20.758078   13921 main.go:141] libmachine: (addons-009491) Calling .GetSSHUsername
	I0906 18:30:20.758220   13921 sshutil.go:53] new ssh client: &{IP:192.168.39.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19576-6054/.minikube/machines/addons-009491/id_rsa Username:docker}
	I0906 18:30:20.760198   13921 addons.go:431] installing /etc/kubernetes/addons/volcano-deployment.yaml
	I0906 18:30:20.760223   13921 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volcano-deployment.yaml (434001 bytes)
	I0906 18:30:20.760240   13921 main.go:141] libmachine: (addons-009491) Calling .GetSSHHostname
	I0906 18:30:20.761184   13921 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44671
	I0906 18:30:20.761737   13921 main.go:141] libmachine: () Calling .GetVersion
	I0906 18:30:20.762341   13921 main.go:141] libmachine: Using API Version  1
	I0906 18:30:20.762367   13921 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 18:30:20.762872   13921 main.go:141] libmachine: () Calling .GetMachineName
	I0906 18:30:20.764044   13921 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0906 18:30:20.764086   13921 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 18:30:20.764276   13921 main.go:141] libmachine: (addons-009491) DBG | domain addons-009491 has defined MAC address 52:54:00:47:6e:9e in network mk-addons-009491
	I0906 18:30:20.764430   13921 main.go:141] libmachine: (addons-009491) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:47:6e:9e", ip: ""} in network mk-addons-009491: {Iface:virbr1 ExpiryTime:2024-09-06 19:29:43 +0000 UTC Type:0 Mac:52:54:00:47:6e:9e Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:addons-009491 Clientid:01:52:54:00:47:6e:9e}
	I0906 18:30:20.764447   13921 main.go:141] libmachine: (addons-009491) DBG | domain addons-009491 has defined IP address 192.168.39.227 and MAC address 52:54:00:47:6e:9e in network mk-addons-009491
	I0906 18:30:20.764709   13921 main.go:141] libmachine: (addons-009491) Calling .GetSSHPort
	I0906 18:30:20.764875   13921 main.go:141] libmachine: (addons-009491) Calling .GetSSHKeyPath
	I0906 18:30:20.764989   13921 main.go:141] libmachine: (addons-009491) Calling .GetSSHUsername
	I0906 18:30:20.765099   13921 sshutil.go:53] new ssh client: &{IP:192.168.39.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19576-6054/.minikube/machines/addons-009491/id_rsa Username:docker}
	I0906 18:30:20.772521   13921 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45065
	I0906 18:30:20.775298   13921 main.go:141] libmachine: () Calling .GetVersion
	I0906 18:30:20.775842   13921 main.go:141] libmachine: Using API Version  1
	I0906 18:30:20.775860   13921 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 18:30:20.776510   13921 main.go:141] libmachine: () Calling .GetMachineName
	I0906 18:30:20.777040   13921 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0906 18:30:20.777072   13921 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 18:30:20.777270   13921 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41411
	I0906 18:30:20.777435   13921 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45023
	I0906 18:30:20.777570   13921 main.go:141] libmachine: () Calling .GetVersion
	I0906 18:30:20.777779   13921 main.go:141] libmachine: () Calling .GetVersion
	I0906 18:30:20.778005   13921 main.go:141] libmachine: Using API Version  1
	I0906 18:30:20.778024   13921 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 18:30:20.778219   13921 main.go:141] libmachine: Using API Version  1
	I0906 18:30:20.778233   13921 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 18:30:20.778573   13921 main.go:141] libmachine: () Calling .GetMachineName
	I0906 18:30:20.778750   13921 main.go:141] libmachine: (addons-009491) Calling .GetState
	I0906 18:30:20.779477   13921 main.go:141] libmachine: () Calling .GetMachineName
	I0906 18:30:20.780214   13921 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0906 18:30:20.780238   13921 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 18:30:20.780459   13921 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45761
	I0906 18:30:20.780870   13921 main.go:141] libmachine: () Calling .GetVersion
	I0906 18:30:20.781457   13921 main.go:141] libmachine: Using API Version  1
	I0906 18:30:20.781477   13921 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 18:30:20.781979   13921 main.go:141] libmachine: () Calling .GetMachineName
	I0906 18:30:20.782182   13921 main.go:141] libmachine: (addons-009491) Calling .DriverName
	I0906 18:30:20.782255   13921 main.go:141] libmachine: (addons-009491) Calling .GetState
	I0906 18:30:20.784034   13921 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0906 18:30:20.784972   13921 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-009491"
	I0906 18:30:20.785017   13921 host.go:66] Checking if "addons-009491" exists ...
	I0906 18:30:20.785260   13921 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0906 18:30:20.785274   13921 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0906 18:30:20.785295   13921 main.go:141] libmachine: (addons-009491) Calling .GetSSHHostname
	I0906 18:30:20.785402   13921 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0906 18:30:20.785443   13921 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 18:30:20.786180   13921 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45573
	I0906 18:30:20.787588   13921 main.go:141] libmachine: () Calling .GetVersion
	I0906 18:30:20.788144   13921 main.go:141] libmachine: Using API Version  1
	I0906 18:30:20.788170   13921 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 18:30:20.788618   13921 main.go:141] libmachine: (addons-009491) DBG | domain addons-009491 has defined MAC address 52:54:00:47:6e:9e in network mk-addons-009491
	I0906 18:30:20.788691   13921 main.go:141] libmachine: () Calling .GetMachineName
	I0906 18:30:20.789216   13921 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0906 18:30:20.789239   13921 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 18:30:20.789430   13921 main.go:141] libmachine: (addons-009491) Calling .GetSSHPort
	I0906 18:30:20.789481   13921 main.go:141] libmachine: (addons-009491) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:47:6e:9e", ip: ""} in network mk-addons-009491: {Iface:virbr1 ExpiryTime:2024-09-06 19:29:43 +0000 UTC Type:0 Mac:52:54:00:47:6e:9e Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:addons-009491 Clientid:01:52:54:00:47:6e:9e}
	I0906 18:30:20.789514   13921 main.go:141] libmachine: (addons-009491) DBG | domain addons-009491 has defined IP address 192.168.39.227 and MAC address 52:54:00:47:6e:9e in network mk-addons-009491
	I0906 18:30:20.789618   13921 main.go:141] libmachine: (addons-009491) Calling .GetSSHKeyPath
	I0906 18:30:20.789789   13921 main.go:141] libmachine: (addons-009491) Calling .GetSSHUsername
	I0906 18:30:20.789931   13921 sshutil.go:53] new ssh client: &{IP:192.168.39.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19576-6054/.minikube/machines/addons-009491/id_rsa Username:docker}
	I0906 18:30:20.793554   13921 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42293
	I0906 18:30:20.793970   13921 main.go:141] libmachine: () Calling .GetVersion
	I0906 18:30:20.794514   13921 main.go:141] libmachine: Using API Version  1
	I0906 18:30:20.794530   13921 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 18:30:20.794917   13921 main.go:141] libmachine: () Calling .GetMachineName
	I0906 18:30:20.795824   13921 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0906 18:30:20.795861   13921 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 18:30:20.797900   13921 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42331
	I0906 18:30:20.798500   13921 main.go:141] libmachine: () Calling .GetVersion
	I0906 18:30:20.798594   13921 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34735
	I0906 18:30:20.799106   13921 main.go:141] libmachine: Using API Version  1
	I0906 18:30:20.799125   13921 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 18:30:20.799189   13921 main.go:141] libmachine: () Calling .GetVersion
	I0906 18:30:20.799668   13921 main.go:141] libmachine: Using API Version  1
	I0906 18:30:20.799684   13921 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 18:30:20.799728   13921 main.go:141] libmachine: () Calling .GetMachineName
	I0906 18:30:20.800538   13921 main.go:141] libmachine: () Calling .GetMachineName
	I0906 18:30:20.800583   13921 main.go:141] libmachine: (addons-009491) Calling .GetState
	I0906 18:30:20.800745   13921 main.go:141] libmachine: (addons-009491) Calling .GetState
	I0906 18:30:20.802481   13921 main.go:141] libmachine: (addons-009491) Calling .DriverName
	I0906 18:30:20.802963   13921 main.go:141] libmachine: (addons-009491) Calling .DriverName
	I0906 18:30:20.804138   13921 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0906 18:30:20.804933   13921 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I0906 18:30:20.806099   13921 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0906 18:30:20.806118   13921 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0906 18:30:20.806137   13921 main.go:141] libmachine: (addons-009491) Calling .GetSSHHostname
	I0906 18:30:20.807559   13921 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0906 18:30:20.807577   13921 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0906 18:30:20.807596   13921 main.go:141] libmachine: (addons-009491) Calling .GetSSHHostname
	I0906 18:30:20.808353   13921 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34253
	I0906 18:30:20.808933   13921 main.go:141] libmachine: () Calling .GetVersion
	I0906 18:30:20.809025   13921 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36981
	I0906 18:30:20.809534   13921 main.go:141] libmachine: () Calling .GetVersion
	I0906 18:30:20.809601   13921 main.go:141] libmachine: Using API Version  1
	I0906 18:30:20.809617   13921 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 18:30:20.809621   13921 main.go:141] libmachine: (addons-009491) DBG | domain addons-009491 has defined MAC address 52:54:00:47:6e:9e in network mk-addons-009491
	I0906 18:30:20.809949   13921 main.go:141] libmachine: () Calling .GetMachineName
	I0906 18:30:20.810072   13921 main.go:141] libmachine: Using API Version  1
	I0906 18:30:20.810084   13921 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 18:30:20.810656   13921 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0906 18:30:20.810692   13921 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 18:30:20.810893   13921 main.go:141] libmachine: () Calling .GetMachineName
	I0906 18:30:20.811096   13921 main.go:141] libmachine: (addons-009491) Calling .GetState
	I0906 18:30:20.811696   13921 main.go:141] libmachine: (addons-009491) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:47:6e:9e", ip: ""} in network mk-addons-009491: {Iface:virbr1 ExpiryTime:2024-09-06 19:29:43 +0000 UTC Type:0 Mac:52:54:00:47:6e:9e Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:addons-009491 Clientid:01:52:54:00:47:6e:9e}
	I0906 18:30:20.811725   13921 main.go:141] libmachine: (addons-009491) DBG | domain addons-009491 has defined IP address 192.168.39.227 and MAC address 52:54:00:47:6e:9e in network mk-addons-009491
	I0906 18:30:20.812486   13921 main.go:141] libmachine: (addons-009491) Calling .GetSSHPort
	I0906 18:30:20.812689   13921 main.go:141] libmachine: (addons-009491) Calling .GetSSHKeyPath
	I0906 18:30:20.812890   13921 main.go:141] libmachine: (addons-009491) Calling .GetSSHUsername
	I0906 18:30:20.812949   13921 main.go:141] libmachine: (addons-009491) Calling .DriverName
	I0906 18:30:20.813338   13921 sshutil.go:53] new ssh client: &{IP:192.168.39.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19576-6054/.minikube/machines/addons-009491/id_rsa Username:docker}
	I0906 18:30:20.813667   13921 main.go:141] libmachine: (addons-009491) DBG | domain addons-009491 has defined MAC address 52:54:00:47:6e:9e in network mk-addons-009491
	I0906 18:30:20.813691   13921 main.go:141] libmachine: (addons-009491) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:47:6e:9e", ip: ""} in network mk-addons-009491: {Iface:virbr1 ExpiryTime:2024-09-06 19:29:43 +0000 UTC Type:0 Mac:52:54:00:47:6e:9e Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:addons-009491 Clientid:01:52:54:00:47:6e:9e}
	I0906 18:30:20.813721   13921 main.go:141] libmachine: (addons-009491) DBG | domain addons-009491 has defined IP address 192.168.39.227 and MAC address 52:54:00:47:6e:9e in network mk-addons-009491
	I0906 18:30:20.813763   13921 main.go:141] libmachine: (addons-009491) Calling .GetSSHPort
	I0906 18:30:20.813833   13921 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33201
	I0906 18:30:20.813981   13921 main.go:141] libmachine: (addons-009491) Calling .GetSSHKeyPath
	I0906 18:30:20.814054   13921 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41005
	I0906 18:30:20.814188   13921 main.go:141] libmachine: (addons-009491) Calling .GetSSHUsername
	I0906 18:30:20.814341   13921 sshutil.go:53] new ssh client: &{IP:192.168.39.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19576-6054/.minikube/machines/addons-009491/id_rsa Username:docker}
	I0906 18:30:20.814604   13921 main.go:141] libmachine: () Calling .GetVersion
	I0906 18:30:20.814859   13921 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0906 18:30:20.815625   13921 main.go:141] libmachine: Using API Version  1
	I0906 18:30:20.815656   13921 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 18:30:20.816066   13921 main.go:141] libmachine: () Calling .GetMachineName
	I0906 18:30:20.816141   13921 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41021
	I0906 18:30:20.816296   13921 main.go:141] libmachine: () Calling .GetVersion
	I0906 18:30:20.816855   13921 main.go:141] libmachine: () Calling .GetVersion
	I0906 18:30:20.816948   13921 main.go:141] libmachine: (addons-009491) Calling .GetState
	I0906 18:30:20.817466   13921 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.2
	I0906 18:30:20.817673   13921 main.go:141] libmachine: Using API Version  1
	I0906 18:30:20.817691   13921 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 18:30:20.818247   13921 main.go:141] libmachine: Using API Version  1
	I0906 18:30:20.818263   13921 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 18:30:20.818318   13921 main.go:141] libmachine: () Calling .GetMachineName
	I0906 18:30:20.818532   13921 main.go:141] libmachine: (addons-009491) Calling .GetState
	I0906 18:30:20.818654   13921 main.go:141] libmachine: () Calling .GetMachineName
	I0906 18:30:20.818699   13921 main.go:141] libmachine: (addons-009491) Calling .DriverName
	I0906 18:30:20.818746   13921 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39055
	I0906 18:30:20.819214   13921 main.go:141] libmachine: () Calling .GetVersion
	I0906 18:30:20.819810   13921 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0906 18:30:20.819848   13921 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 18:30:20.820049   13921 main.go:141] libmachine: (addons-009491) Calling .DriverName
	I0906 18:30:20.820306   13921 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0906 18:30:20.820317   13921 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0906 18:30:20.820341   13921 main.go:141] libmachine: (addons-009491) Calling .GetSSHHostname
	I0906 18:30:20.820454   13921 main.go:141] libmachine: Using API Version  1
	I0906 18:30:20.820470   13921 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 18:30:20.821048   13921 main.go:141] libmachine: () Calling .GetMachineName
	I0906 18:30:20.821260   13921 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0906 18:30:20.821357   13921 main.go:141] libmachine: (addons-009491) Calling .GetState
	I0906 18:30:20.821408   13921 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0906 18:30:20.822764   13921 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0906 18:30:20.822819   13921 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0906 18:30:20.822838   13921 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0906 18:30:20.822854   13921 main.go:141] libmachine: (addons-009491) Calling .GetSSHHostname
	I0906 18:30:20.823504   13921 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35649
	I0906 18:30:20.824100   13921 main.go:141] libmachine: () Calling .GetVersion
	I0906 18:30:20.824601   13921 main.go:141] libmachine: Using API Version  1
	I0906 18:30:20.824618   13921 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 18:30:20.824672   13921 main.go:141] libmachine: (addons-009491) DBG | domain addons-009491 has defined MAC address 52:54:00:47:6e:9e in network mk-addons-009491
	I0906 18:30:20.824994   13921 main.go:141] libmachine: () Calling .GetMachineName
	I0906 18:30:20.825186   13921 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0906 18:30:20.825554   13921 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0906 18:30:20.825583   13921 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 18:30:20.825773   13921 main.go:141] libmachine: (addons-009491) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:47:6e:9e", ip: ""} in network mk-addons-009491: {Iface:virbr1 ExpiryTime:2024-09-06 19:29:43 +0000 UTC Type:0 Mac:52:54:00:47:6e:9e Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:addons-009491 Clientid:01:52:54:00:47:6e:9e}
	I0906 18:30:20.825788   13921 main.go:141] libmachine: (addons-009491) DBG | domain addons-009491 has defined IP address 192.168.39.227 and MAC address 52:54:00:47:6e:9e in network mk-addons-009491
	I0906 18:30:20.825816   13921 main.go:141] libmachine: (addons-009491) Calling .GetSSHPort
	I0906 18:30:20.826021   13921 main.go:141] libmachine: (addons-009491) Calling .GetSSHKeyPath
	I0906 18:30:20.826085   13921 main.go:141] libmachine: (addons-009491) Calling .DriverName
	I0906 18:30:20.826400   13921 main.go:141] libmachine: (addons-009491) Calling .GetSSHUsername
	I0906 18:30:20.826551   13921 sshutil.go:53] new ssh client: &{IP:192.168.39.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19576-6054/.minikube/machines/addons-009491/id_rsa Username:docker}
	I0906 18:30:20.827504   13921 main.go:141] libmachine: (addons-009491) DBG | domain addons-009491 has defined MAC address 52:54:00:47:6e:9e in network mk-addons-009491
	I0906 18:30:20.827557   13921 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46479
	I0906 18:30:20.827860   13921 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0906 18:30:20.827865   13921 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I0906 18:30:20.828164   13921 main.go:141] libmachine: () Calling .GetVersion
	I0906 18:30:20.828635   13921 main.go:141] libmachine: Using API Version  1
	I0906 18:30:20.828652   13921 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 18:30:20.828722   13921 main.go:141] libmachine: (addons-009491) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:47:6e:9e", ip: ""} in network mk-addons-009491: {Iface:virbr1 ExpiryTime:2024-09-06 19:29:43 +0000 UTC Type:0 Mac:52:54:00:47:6e:9e Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:addons-009491 Clientid:01:52:54:00:47:6e:9e}
	I0906 18:30:20.828735   13921 main.go:141] libmachine: (addons-009491) DBG | domain addons-009491 has defined IP address 192.168.39.227 and MAC address 52:54:00:47:6e:9e in network mk-addons-009491
	I0906 18:30:20.828978   13921 main.go:141] libmachine: () Calling .GetMachineName
	I0906 18:30:20.829020   13921 main.go:141] libmachine: (addons-009491) Calling .GetSSHPort
	I0906 18:30:20.829175   13921 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0906 18:30:20.829180   13921 main.go:141] libmachine: (addons-009491) Calling .DriverName
	I0906 18:30:20.829190   13921 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I0906 18:30:20.829204   13921 main.go:141] libmachine: (addons-009491) Calling .GetSSHHostname
	I0906 18:30:20.829204   13921 main.go:141] libmachine: (addons-009491) Calling .GetSSHKeyPath
	I0906 18:30:20.829359   13921 main.go:141] libmachine: (addons-009491) Calling .GetSSHUsername
	I0906 18:30:20.829529   13921 sshutil.go:53] new ssh client: &{IP:192.168.39.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19576-6054/.minikube/machines/addons-009491/id_rsa Username:docker}
	I0906 18:30:20.830508   13921 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0906 18:30:20.831612   13921 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0906 18:30:20.832084   13921 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33179
	I0906 18:30:20.832184   13921 main.go:141] libmachine: (addons-009491) DBG | domain addons-009491 has defined MAC address 52:54:00:47:6e:9e in network mk-addons-009491
	I0906 18:30:20.832507   13921 main.go:141] libmachine: () Calling .GetVersion
	I0906 18:30:20.832584   13921 main.go:141] libmachine: (addons-009491) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:47:6e:9e", ip: ""} in network mk-addons-009491: {Iface:virbr1 ExpiryTime:2024-09-06 19:29:43 +0000 UTC Type:0 Mac:52:54:00:47:6e:9e Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:addons-009491 Clientid:01:52:54:00:47:6e:9e}
	I0906 18:30:20.832609   13921 main.go:141] libmachine: (addons-009491) DBG | domain addons-009491 has defined IP address 192.168.39.227 and MAC address 52:54:00:47:6e:9e in network mk-addons-009491
	I0906 18:30:20.832819   13921 main.go:141] libmachine: (addons-009491) Calling .GetSSHPort
	I0906 18:30:20.833011   13921 main.go:141] libmachine: (addons-009491) Calling .GetSSHKeyPath
	I0906 18:30:20.833121   13921 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46655
	I0906 18:30:20.833414   13921 main.go:141] libmachine: Using API Version  1
	I0906 18:30:20.833429   13921 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 18:30:20.833492   13921 main.go:141] libmachine: () Calling .GetVersion
	I0906 18:30:20.833793   13921 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0906 18:30:20.833803   13921 main.go:141] libmachine: (addons-009491) Calling .GetSSHUsername
	I0906 18:30:20.834052   13921 main.go:141] libmachine: Using API Version  1
	I0906 18:30:20.834069   13921 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 18:30:20.834073   13921 sshutil.go:53] new ssh client: &{IP:192.168.39.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19576-6054/.minikube/machines/addons-009491/id_rsa Username:docker}
	I0906 18:30:20.834438   13921 main.go:141] libmachine: () Calling .GetMachineName
	I0906 18:30:20.834545   13921 main.go:141] libmachine: () Calling .GetMachineName
	I0906 18:30:20.834607   13921 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34791
	I0906 18:30:20.834701   13921 main.go:141] libmachine: (addons-009491) Calling .GetState
	I0906 18:30:20.835272   13921 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0906 18:30:20.835310   13921 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 18:30:20.835477   13921 main.go:141] libmachine: () Calling .GetVersion
	I0906 18:30:20.835972   13921 main.go:141] libmachine: Using API Version  1
	I0906 18:30:20.835991   13921 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 18:30:20.836281   13921 main.go:141] libmachine: (addons-009491) Calling .DriverName
	I0906 18:30:20.836345   13921 main.go:141] libmachine: () Calling .GetMachineName
	I0906 18:30:20.836562   13921 main.go:141] libmachine: (addons-009491) Calling .GetState
	I0906 18:30:20.836604   13921 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0906 18:30:20.837923   13921 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0906 18:30:20.837946   13921 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0906 18:30:20.837963   13921 main.go:141] libmachine: (addons-009491) Calling .GetSSHHostname
	I0906 18:30:20.837923   13921 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	I0906 18:30:20.839250   13921 main.go:141] libmachine: (addons-009491) Calling .DriverName
	I0906 18:30:20.839291   13921 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0906 18:30:20.839304   13921 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0906 18:30:20.839318   13921 main.go:141] libmachine: (addons-009491) Calling .GetSSHHostname
	I0906 18:30:20.840641   13921 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.23
	I0906 18:30:20.841282   13921 main.go:141] libmachine: (addons-009491) DBG | domain addons-009491 has defined MAC address 52:54:00:47:6e:9e in network mk-addons-009491
	I0906 18:30:20.841783   13921 main.go:141] libmachine: (addons-009491) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:47:6e:9e", ip: ""} in network mk-addons-009491: {Iface:virbr1 ExpiryTime:2024-09-06 19:29:43 +0000 UTC Type:0 Mac:52:54:00:47:6e:9e Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:addons-009491 Clientid:01:52:54:00:47:6e:9e}
	I0906 18:30:20.841808   13921 main.go:141] libmachine: (addons-009491) DBG | domain addons-009491 has defined IP address 192.168.39.227 and MAC address 52:54:00:47:6e:9e in network mk-addons-009491
	I0906 18:30:20.841870   13921 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0906 18:30:20.841894   13921 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0906 18:30:20.841916   13921 main.go:141] libmachine: (addons-009491) Calling .GetSSHHostname
	I0906 18:30:20.841929   13921 main.go:141] libmachine: (addons-009491) Calling .GetSSHPort
	I0906 18:30:20.842112   13921 main.go:141] libmachine: (addons-009491) Calling .GetSSHKeyPath
	I0906 18:30:20.842637   13921 main.go:141] libmachine: (addons-009491) Calling .GetSSHUsername
	I0906 18:30:20.842812   13921 sshutil.go:53] new ssh client: &{IP:192.168.39.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19576-6054/.minikube/machines/addons-009491/id_rsa Username:docker}
	I0906 18:30:20.842859   13921 main.go:141] libmachine: (addons-009491) DBG | domain addons-009491 has defined MAC address 52:54:00:47:6e:9e in network mk-addons-009491
	I0906 18:30:20.843390   13921 main.go:141] libmachine: (addons-009491) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:47:6e:9e", ip: ""} in network mk-addons-009491: {Iface:virbr1 ExpiryTime:2024-09-06 19:29:43 +0000 UTC Type:0 Mac:52:54:00:47:6e:9e Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:addons-009491 Clientid:01:52:54:00:47:6e:9e}
	I0906 18:30:20.843422   13921 main.go:141] libmachine: (addons-009491) DBG | domain addons-009491 has defined IP address 192.168.39.227 and MAC address 52:54:00:47:6e:9e in network mk-addons-009491
	I0906 18:30:20.843692   13921 main.go:141] libmachine: (addons-009491) Calling .GetSSHPort
	I0906 18:30:20.843885   13921 main.go:141] libmachine: (addons-009491) Calling .GetSSHKeyPath
	I0906 18:30:20.844030   13921 main.go:141] libmachine: (addons-009491) Calling .GetSSHUsername
	I0906 18:30:20.844237   13921 sshutil.go:53] new ssh client: &{IP:192.168.39.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19576-6054/.minikube/machines/addons-009491/id_rsa Username:docker}
	I0906 18:30:20.847076   13921 main.go:141] libmachine: (addons-009491) DBG | domain addons-009491 has defined MAC address 52:54:00:47:6e:9e in network mk-addons-009491
	I0906 18:30:20.847119   13921 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33231
	I0906 18:30:20.847379   13921 main.go:141] libmachine: (addons-009491) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:47:6e:9e", ip: ""} in network mk-addons-009491: {Iface:virbr1 ExpiryTime:2024-09-06 19:29:43 +0000 UTC Type:0 Mac:52:54:00:47:6e:9e Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:addons-009491 Clientid:01:52:54:00:47:6e:9e}
	I0906 18:30:20.847402   13921 main.go:141] libmachine: (addons-009491) DBG | domain addons-009491 has defined IP address 192.168.39.227 and MAC address 52:54:00:47:6e:9e in network mk-addons-009491
	I0906 18:30:20.847562   13921 main.go:141] libmachine: (addons-009491) Calling .GetSSHPort
	I0906 18:30:20.847712   13921 main.go:141] libmachine: (addons-009491) Calling .GetSSHKeyPath
	I0906 18:30:20.847856   13921 main.go:141] libmachine: (addons-009491) Calling .GetSSHUsername
	I0906 18:30:20.847967   13921 sshutil.go:53] new ssh client: &{IP:192.168.39.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19576-6054/.minikube/machines/addons-009491/id_rsa Username:docker}
	I0906 18:30:20.855669   13921 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37083
	I0906 18:30:20.859618   13921 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33773
	I0906 18:30:20.863778   13921 main.go:141] libmachine: () Calling .GetVersion
	I0906 18:30:20.863789   13921 main.go:141] libmachine: () Calling .GetVersion
	I0906 18:30:20.863790   13921 main.go:141] libmachine: () Calling .GetVersion
	I0906 18:30:20.864355   13921 main.go:141] libmachine: Using API Version  1
	I0906 18:30:20.864384   13921 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 18:30:20.864483   13921 main.go:141] libmachine: Using API Version  1
	I0906 18:30:20.864490   13921 main.go:141] libmachine: Using API Version  1
	I0906 18:30:20.864501   13921 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 18:30:20.864505   13921 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 18:30:20.864762   13921 main.go:141] libmachine: () Calling .GetMachineName
	I0906 18:30:20.864816   13921 main.go:141] libmachine: () Calling .GetMachineName
	I0906 18:30:20.864909   13921 main.go:141] libmachine: () Calling .GetMachineName
	I0906 18:30:20.864961   13921 main.go:141] libmachine: (addons-009491) Calling .GetState
	I0906 18:30:20.865020   13921 main.go:141] libmachine: (addons-009491) Calling .GetState
	I0906 18:30:20.865133   13921 main.go:141] libmachine: (addons-009491) Calling .GetState
	W0906 18:30:20.865242   13921 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:33910->192.168.39.227:22: read: connection reset by peer
	I0906 18:30:20.865269   13921 retry.go:31] will retry after 192.098309ms: ssh: handshake failed: read tcp 192.168.39.1:33910->192.168.39.227:22: read: connection reset by peer
	I0906 18:30:20.867070   13921 main.go:141] libmachine: (addons-009491) Calling .DriverName
	I0906 18:30:20.867444   13921 main.go:141] libmachine: (addons-009491) Calling .DriverName
	I0906 18:30:20.867733   13921 main.go:141] libmachine: (addons-009491) Calling .DriverName
	I0906 18:30:20.869096   13921 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0906 18:30:20.869119   13921 out.go:177]   - Using image docker.io/registry:2.8.3
	I0906 18:30:20.869097   13921 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0906 18:30:20.870270   13921 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0906 18:30:20.870284   13921 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0906 18:30:20.870592   13921 main.go:141] libmachine: (addons-009491) Calling .GetSSHHostname
	I0906 18:30:20.871721   13921 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0906 18:30:20.872499   13921 out.go:177]   - Using image docker.io/busybox:stable
	I0906 18:30:20.873329   13921 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0906 18:30:20.873350   13921 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0906 18:30:20.873367   13921 main.go:141] libmachine: (addons-009491) Calling .GetSSHHostname
	I0906 18:30:20.873562   13921 main.go:141] libmachine: (addons-009491) DBG | domain addons-009491 has defined MAC address 52:54:00:47:6e:9e in network mk-addons-009491
	I0906 18:30:20.873951   13921 main.go:141] libmachine: (addons-009491) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:47:6e:9e", ip: ""} in network mk-addons-009491: {Iface:virbr1 ExpiryTime:2024-09-06 19:29:43 +0000 UTC Type:0 Mac:52:54:00:47:6e:9e Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:addons-009491 Clientid:01:52:54:00:47:6e:9e}
	I0906 18:30:20.873977   13921 main.go:141] libmachine: (addons-009491) DBG | domain addons-009491 has defined IP address 192.168.39.227 and MAC address 52:54:00:47:6e:9e in network mk-addons-009491
	I0906 18:30:20.874083   13921 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0906 18:30:20.874097   13921 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0906 18:30:20.874111   13921 main.go:141] libmachine: (addons-009491) Calling .GetSSHHostname
	I0906 18:30:20.874140   13921 main.go:141] libmachine: (addons-009491) Calling .GetSSHPort
	I0906 18:30:20.874322   13921 main.go:141] libmachine: (addons-009491) Calling .GetSSHKeyPath
	I0906 18:30:20.874497   13921 main.go:141] libmachine: (addons-009491) Calling .GetSSHUsername
	I0906 18:30:20.874637   13921 sshutil.go:53] new ssh client: &{IP:192.168.39.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19576-6054/.minikube/machines/addons-009491/id_rsa Username:docker}
	I0906 18:30:20.877201   13921 main.go:141] libmachine: (addons-009491) DBG | domain addons-009491 has defined MAC address 52:54:00:47:6e:9e in network mk-addons-009491
	I0906 18:30:20.877281   13921 main.go:141] libmachine: (addons-009491) DBG | domain addons-009491 has defined MAC address 52:54:00:47:6e:9e in network mk-addons-009491
	I0906 18:30:20.877666   13921 main.go:141] libmachine: (addons-009491) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:47:6e:9e", ip: ""} in network mk-addons-009491: {Iface:virbr1 ExpiryTime:2024-09-06 19:29:43 +0000 UTC Type:0 Mac:52:54:00:47:6e:9e Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:addons-009491 Clientid:01:52:54:00:47:6e:9e}
	I0906 18:30:20.877695   13921 main.go:141] libmachine: (addons-009491) DBG | domain addons-009491 has defined IP address 192.168.39.227 and MAC address 52:54:00:47:6e:9e in network mk-addons-009491
	I0906 18:30:20.878194   13921 main.go:141] libmachine: (addons-009491) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:47:6e:9e", ip: ""} in network mk-addons-009491: {Iface:virbr1 ExpiryTime:2024-09-06 19:29:43 +0000 UTC Type:0 Mac:52:54:00:47:6e:9e Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:addons-009491 Clientid:01:52:54:00:47:6e:9e}
	I0906 18:30:20.878215   13921 main.go:141] libmachine: (addons-009491) DBG | domain addons-009491 has defined IP address 192.168.39.227 and MAC address 52:54:00:47:6e:9e in network mk-addons-009491
	I0906 18:30:20.878271   13921 main.go:141] libmachine: (addons-009491) Calling .GetSSHPort
	I0906 18:30:20.878451   13921 main.go:141] libmachine: (addons-009491) Calling .GetSSHKeyPath
	I0906 18:30:20.878497   13921 main.go:141] libmachine: (addons-009491) Calling .GetSSHPort
	I0906 18:30:20.878685   13921 main.go:141] libmachine: (addons-009491) Calling .GetSSHKeyPath
	I0906 18:30:20.878711   13921 main.go:141] libmachine: (addons-009491) Calling .GetSSHUsername
	I0906 18:30:20.878853   13921 main.go:141] libmachine: (addons-009491) Calling .GetSSHUsername
	I0906 18:30:20.878854   13921 sshutil.go:53] new ssh client: &{IP:192.168.39.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19576-6054/.minikube/machines/addons-009491/id_rsa Username:docker}
	I0906 18:30:20.879257   13921 sshutil.go:53] new ssh client: &{IP:192.168.39.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19576-6054/.minikube/machines/addons-009491/id_rsa Username:docker}
	W0906 18:30:20.879641   13921 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:33930->192.168.39.227:22: read: connection reset by peer
	I0906 18:30:20.879657   13921 retry.go:31] will retry after 179.862016ms: ssh: handshake failed: read tcp 192.168.39.1:33930->192.168.39.227:22: read: connection reset by peer
	W0906 18:30:20.879903   13921 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:33944->192.168.39.227:22: read: connection reset by peer
	I0906 18:30:20.879931   13921 retry.go:31] will retry after 223.472053ms: ssh: handshake failed: read tcp 192.168.39.1:33944->192.168.39.227:22: read: connection reset by peer
	I0906 18:30:20.881948   13921 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42867
	I0906 18:30:20.882272   13921 main.go:141] libmachine: () Calling .GetVersion
	I0906 18:30:20.882681   13921 main.go:141] libmachine: Using API Version  1
	I0906 18:30:20.882697   13921 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 18:30:20.882941   13921 main.go:141] libmachine: () Calling .GetMachineName
	I0906 18:30:20.883102   13921 main.go:141] libmachine: (addons-009491) Calling .GetState
	I0906 18:30:20.884379   13921 main.go:141] libmachine: (addons-009491) Calling .DriverName
	I0906 18:30:20.885898   13921 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0906 18:30:20.886965   13921 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0906 18:30:20.886979   13921 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0906 18:30:20.886990   13921 main.go:141] libmachine: (addons-009491) Calling .GetSSHHostname
	I0906 18:30:20.889631   13921 main.go:141] libmachine: (addons-009491) DBG | domain addons-009491 has defined MAC address 52:54:00:47:6e:9e in network mk-addons-009491
	I0906 18:30:20.890011   13921 main.go:141] libmachine: (addons-009491) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:47:6e:9e", ip: ""} in network mk-addons-009491: {Iface:virbr1 ExpiryTime:2024-09-06 19:29:43 +0000 UTC Type:0 Mac:52:54:00:47:6e:9e Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:addons-009491 Clientid:01:52:54:00:47:6e:9e}
	I0906 18:30:20.890040   13921 main.go:141] libmachine: (addons-009491) DBG | domain addons-009491 has defined IP address 192.168.39.227 and MAC address 52:54:00:47:6e:9e in network mk-addons-009491
	I0906 18:30:20.890170   13921 main.go:141] libmachine: (addons-009491) Calling .GetSSHPort
	I0906 18:30:20.890328   13921 main.go:141] libmachine: (addons-009491) Calling .GetSSHKeyPath
	I0906 18:30:20.890450   13921 main.go:141] libmachine: (addons-009491) Calling .GetSSHUsername
	I0906 18:30:20.890583   13921 sshutil.go:53] new ssh client: &{IP:192.168.39.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19576-6054/.minikube/machines/addons-009491/id_rsa Username:docker}
	W0906 18:30:20.893623   13921 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:33948->192.168.39.227:22: read: connection reset by peer
	I0906 18:30:20.893647   13921 retry.go:31] will retry after 171.024976ms: ssh: handshake failed: read tcp 192.168.39.1:33948->192.168.39.227:22: read: connection reset by peer
	I0906 18:30:21.119798   13921 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0906 18:30:21.120059   13921 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0906 18:30:21.161239   13921 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0906 18:30:21.222899   13921 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0906 18:30:21.222924   13921 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0906 18:30:21.231367   13921 node_ready.go:35] waiting up to 6m0s for node "addons-009491" to be "Ready" ...
	I0906 18:30:21.234954   13921 node_ready.go:49] node "addons-009491" has status "Ready":"True"
	I0906 18:30:21.234973   13921 node_ready.go:38] duration metric: took 3.578409ms for node "addons-009491" to be "Ready" ...
	I0906 18:30:21.234982   13921 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0906 18:30:21.244403   13921 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-009491" in "kube-system" namespace to be "Ready" ...
	I0906 18:30:21.259759   13921 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0906 18:30:21.259787   13921 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0906 18:30:21.371605   13921 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0906 18:30:21.371631   13921 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0906 18:30:21.372942   13921 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0906 18:30:21.372959   13921 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0906 18:30:21.386574   13921 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0906 18:30:21.386593   13921 ssh_runner.go:362] scp helm-tiller/helm-tiller-rbac.yaml --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I0906 18:30:21.400633   13921 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0906 18:30:21.406807   13921 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0906 18:30:21.429836   13921 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml
	I0906 18:30:21.448030   13921 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0906 18:30:21.474049   13921 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0906 18:30:21.481122   13921 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0906 18:30:21.496532   13921 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0906 18:30:21.496554   13921 ssh_runner.go:362] scp helm-tiller/helm-tiller-svc.yaml --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I0906 18:30:21.504535   13921 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0906 18:30:21.504554   13921 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0906 18:30:21.510867   13921 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0906 18:30:21.510884   13921 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0906 18:30:21.512345   13921 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0906 18:30:21.512359   13921 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0906 18:30:21.569329   13921 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0906 18:30:21.569350   13921 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0906 18:30:21.595520   13921 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0906 18:30:21.605856   13921 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0906 18:30:21.605881   13921 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0906 18:30:21.825873   13921 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0906 18:30:21.825896   13921 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0906 18:30:21.830580   13921 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0906 18:30:21.830597   13921 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0906 18:30:21.862653   13921 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0906 18:30:21.862679   13921 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0906 18:30:21.928749   13921 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0906 18:30:21.928780   13921 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0906 18:30:21.952977   13921 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0906 18:30:21.952999   13921 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0906 18:30:21.986108   13921 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0906 18:30:21.986133   13921 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0906 18:30:22.092263   13921 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0906 18:30:22.136946   13921 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0906 18:30:22.136973   13921 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0906 18:30:22.184927   13921 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0906 18:30:22.184950   13921 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0906 18:30:22.481142   13921 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0906 18:30:22.511206   13921 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0906 18:30:22.511243   13921 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0906 18:30:22.519840   13921 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0906 18:30:22.630258   13921 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0906 18:30:22.630287   13921 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0906 18:30:22.633236   13921 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0906 18:30:22.633255   13921 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0906 18:30:23.036288   13921 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0906 18:30:23.036311   13921 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0906 18:30:23.128503   13921 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0906 18:30:23.128532   13921 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0906 18:30:23.249256   13921 pod_ready.go:103] pod "etcd-addons-009491" in "kube-system" namespace has status "Ready":"False"
	I0906 18:30:23.501474   13921 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0906 18:30:23.501503   13921 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0906 18:30:23.519988   13921 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0906 18:30:23.647203   13921 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0906 18:30:23.693797   13921 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0906 18:30:23.693822   13921 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0906 18:30:23.750445   13921 pod_ready.go:93] pod "etcd-addons-009491" in "kube-system" namespace has status "Ready":"True"
	I0906 18:30:23.750464   13921 pod_ready.go:82] duration metric: took 2.506037847s for pod "etcd-addons-009491" in "kube-system" namespace to be "Ready" ...
	I0906 18:30:23.750472   13921 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-009491" in "kube-system" namespace to be "Ready" ...
	I0906 18:30:23.796909   13921 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0906 18:30:23.796943   13921 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0906 18:30:23.948106   13921 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0906 18:30:23.948134   13921 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0906 18:30:24.007607   13921 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0906 18:30:24.069162   13921 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0906 18:30:24.069194   13921 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0906 18:30:24.238139   13921 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0906 18:30:24.238159   13921 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0906 18:30:24.649217   13921 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (3.529126969s)
	I0906 18:30:24.649250   13921 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0906 18:30:24.796220   13921 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0906 18:30:24.796243   13921 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0906 18:30:24.965728   13921 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0906 18:30:24.965748   13921 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0906 18:30:25.161476   13921 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-009491" context rescaled to 1 replicas
	I0906 18:30:25.261850   13921 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0906 18:30:25.261870   13921 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0906 18:30:25.682877   13921 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0906 18:30:25.772641   13921 pod_ready.go:103] pod "kube-apiserver-addons-009491" in "kube-system" namespace has status "Ready":"False"
	I0906 18:30:26.293679   13921 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (4.893016904s)
	I0906 18:30:26.293726   13921 main.go:141] libmachine: Making call to close driver server
	I0906 18:30:26.293741   13921 main.go:141] libmachine: (addons-009491) Calling .Close
	I0906 18:30:26.293791   13921 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (5.132520127s)
	I0906 18:30:26.293809   13921 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (4.88697544s)
	I0906 18:30:26.293831   13921 main.go:141] libmachine: Making call to close driver server
	I0906 18:30:26.293836   13921 main.go:141] libmachine: Making call to close driver server
	I0906 18:30:26.293843   13921 main.go:141] libmachine: (addons-009491) Calling .Close
	I0906 18:30:26.293846   13921 main.go:141] libmachine: (addons-009491) Calling .Close
	I0906 18:30:26.294013   13921 main.go:141] libmachine: Successfully made call to close driver server
	I0906 18:30:26.294036   13921 main.go:141] libmachine: Making call to close connection to plugin binary
	I0906 18:30:26.294045   13921 main.go:141] libmachine: Making call to close driver server
	I0906 18:30:26.294053   13921 main.go:141] libmachine: (addons-009491) Calling .Close
	I0906 18:30:26.294268   13921 main.go:141] libmachine: (addons-009491) DBG | Closing plugin on server side
	I0906 18:30:26.294298   13921 main.go:141] libmachine: Successfully made call to close driver server
	I0906 18:30:26.294311   13921 main.go:141] libmachine: Making call to close connection to plugin binary
	I0906 18:30:26.294314   13921 main.go:141] libmachine: Successfully made call to close driver server
	I0906 18:30:26.294326   13921 main.go:141] libmachine: Making call to close driver server
	I0906 18:30:26.294333   13921 main.go:141] libmachine: Making call to close connection to plugin binary
	I0906 18:30:26.294335   13921 main.go:141] libmachine: (addons-009491) Calling .Close
	I0906 18:30:26.294342   13921 main.go:141] libmachine: Making call to close driver server
	I0906 18:30:26.294352   13921 main.go:141] libmachine: (addons-009491) Calling .Close
	I0906 18:30:26.294375   13921 main.go:141] libmachine: (addons-009491) DBG | Closing plugin on server side
	I0906 18:30:26.294402   13921 main.go:141] libmachine: Successfully made call to close driver server
	I0906 18:30:26.294421   13921 main.go:141] libmachine: Making call to close connection to plugin binary
	I0906 18:30:26.294554   13921 main.go:141] libmachine: Successfully made call to close driver server
	I0906 18:30:26.294579   13921 main.go:141] libmachine: Making call to close connection to plugin binary
	I0906 18:30:26.294657   13921 main.go:141] libmachine: (addons-009491) DBG | Closing plugin on server side
	I0906 18:30:26.294703   13921 main.go:141] libmachine: Successfully made call to close driver server
	I0906 18:30:26.294712   13921 main.go:141] libmachine: Making call to close connection to plugin binary
	I0906 18:30:26.334299   13921 main.go:141] libmachine: Making call to close driver server
	I0906 18:30:26.334322   13921 main.go:141] libmachine: (addons-009491) Calling .Close
	I0906 18:30:26.334605   13921 main.go:141] libmachine: Successfully made call to close driver server
	I0906 18:30:26.334656   13921 main.go:141] libmachine: Making call to close connection to plugin binary
	I0906 18:30:26.334630   13921 main.go:141] libmachine: (addons-009491) DBG | Closing plugin on server side
	I0906 18:30:27.818689   13921 pod_ready.go:93] pod "kube-apiserver-addons-009491" in "kube-system" namespace has status "Ready":"True"
	I0906 18:30:27.818720   13921 pod_ready.go:82] duration metric: took 4.068239612s for pod "kube-apiserver-addons-009491" in "kube-system" namespace to be "Ready" ...
	I0906 18:30:27.818734   13921 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-009491" in "kube-system" namespace to be "Ready" ...
	I0906 18:30:27.863617   13921 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0906 18:30:27.863653   13921 main.go:141] libmachine: (addons-009491) Calling .GetSSHHostname
	I0906 18:30:27.866766   13921 main.go:141] libmachine: (addons-009491) DBG | domain addons-009491 has defined MAC address 52:54:00:47:6e:9e in network mk-addons-009491
	I0906 18:30:27.867202   13921 main.go:141] libmachine: (addons-009491) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:47:6e:9e", ip: ""} in network mk-addons-009491: {Iface:virbr1 ExpiryTime:2024-09-06 19:29:43 +0000 UTC Type:0 Mac:52:54:00:47:6e:9e Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:addons-009491 Clientid:01:52:54:00:47:6e:9e}
	I0906 18:30:27.867231   13921 main.go:141] libmachine: (addons-009491) DBG | domain addons-009491 has defined IP address 192.168.39.227 and MAC address 52:54:00:47:6e:9e in network mk-addons-009491
	I0906 18:30:27.867387   13921 main.go:141] libmachine: (addons-009491) Calling .GetSSHPort
	I0906 18:30:27.867618   13921 main.go:141] libmachine: (addons-009491) Calling .GetSSHKeyPath
	I0906 18:30:27.867779   13921 main.go:141] libmachine: (addons-009491) Calling .GetSSHUsername
	I0906 18:30:27.867903   13921 sshutil.go:53] new ssh client: &{IP:192.168.39.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19576-6054/.minikube/machines/addons-009491/id_rsa Username:docker}
	I0906 18:30:28.467871   13921 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0906 18:30:28.692995   13921 addons.go:234] Setting addon gcp-auth=true in "addons-009491"
	I0906 18:30:28.693062   13921 host.go:66] Checking if "addons-009491" exists ...
	I0906 18:30:28.693570   13921 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0906 18:30:28.693607   13921 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 18:30:28.709503   13921 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34465
	I0906 18:30:28.709932   13921 main.go:141] libmachine: () Calling .GetVersion
	I0906 18:30:28.710406   13921 main.go:141] libmachine: Using API Version  1
	I0906 18:30:28.710442   13921 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 18:30:28.710874   13921 main.go:141] libmachine: () Calling .GetMachineName
	I0906 18:30:28.711511   13921 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0906 18:30:28.711555   13921 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 18:30:28.726254   13921 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39707
	I0906 18:30:28.726680   13921 main.go:141] libmachine: () Calling .GetVersion
	I0906 18:30:28.727165   13921 main.go:141] libmachine: Using API Version  1
	I0906 18:30:28.727194   13921 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 18:30:28.727666   13921 main.go:141] libmachine: () Calling .GetMachineName
	I0906 18:30:28.727841   13921 main.go:141] libmachine: (addons-009491) Calling .GetState
	I0906 18:30:28.729502   13921 main.go:141] libmachine: (addons-009491) Calling .DriverName
	I0906 18:30:28.729716   13921 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0906 18:30:28.729738   13921 main.go:141] libmachine: (addons-009491) Calling .GetSSHHostname
	I0906 18:30:28.732953   13921 main.go:141] libmachine: (addons-009491) DBG | domain addons-009491 has defined MAC address 52:54:00:47:6e:9e in network mk-addons-009491
	I0906 18:30:28.733329   13921 main.go:141] libmachine: (addons-009491) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:47:6e:9e", ip: ""} in network mk-addons-009491: {Iface:virbr1 ExpiryTime:2024-09-06 19:29:43 +0000 UTC Type:0 Mac:52:54:00:47:6e:9e Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:addons-009491 Clientid:01:52:54:00:47:6e:9e}
	I0906 18:30:28.733358   13921 main.go:141] libmachine: (addons-009491) DBG | domain addons-009491 has defined IP address 192.168.39.227 and MAC address 52:54:00:47:6e:9e in network mk-addons-009491
	I0906 18:30:28.733499   13921 main.go:141] libmachine: (addons-009491) Calling .GetSSHPort
	I0906 18:30:28.733682   13921 main.go:141] libmachine: (addons-009491) Calling .GetSSHKeyPath
	I0906 18:30:28.733856   13921 main.go:141] libmachine: (addons-009491) Calling .GetSSHUsername
	I0906 18:30:28.734019   13921 sshutil.go:53] new ssh client: &{IP:192.168.39.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19576-6054/.minikube/machines/addons-009491/id_rsa Username:docker}
	I0906 18:30:29.889890   13921 pod_ready.go:103] pod "kube-controller-manager-addons-009491" in "kube-system" namespace has status "Ready":"False"
	I0906 18:30:30.919625   13921 pod_ready.go:93] pod "kube-controller-manager-addons-009491" in "kube-system" namespace has status "Ready":"True"
	I0906 18:30:30.919646   13921 pod_ready.go:82] duration metric: took 3.100904903s for pod "kube-controller-manager-addons-009491" in "kube-system" namespace to be "Ready" ...
	I0906 18:30:30.919656   13921 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-009491" in "kube-system" namespace to be "Ready" ...
	I0906 18:30:30.973416   13921 pod_ready.go:93] pod "kube-scheduler-addons-009491" in "kube-system" namespace has status "Ready":"True"
	I0906 18:30:30.973435   13921 pod_ready.go:82] duration metric: took 53.772438ms for pod "kube-scheduler-addons-009491" in "kube-system" namespace to be "Ready" ...
	I0906 18:30:30.973442   13921 pod_ready.go:39] duration metric: took 9.738449733s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0906 18:30:30.973458   13921 api_server.go:52] waiting for apiserver process to appear ...
	I0906 18:30:30.973505   13921 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 18:30:33.997995   13921 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (12.549937323s)
	I0906 18:30:33.998048   13921 main.go:141] libmachine: Making call to close driver server
	I0906 18:30:33.998061   13921 main.go:141] libmachine: (addons-009491) Calling .Close
	I0906 18:30:33.998133   13921 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (12.524054212s)
	I0906 18:30:33.998180   13921 main.go:141] libmachine: Making call to close driver server
	I0906 18:30:33.998188   13921 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (12.517039636s)
	I0906 18:30:33.998222   13921 main.go:141] libmachine: Making call to close driver server
	I0906 18:30:33.998237   13921 main.go:141] libmachine: (addons-009491) Calling .Close
	I0906 18:30:33.998229   13921 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (12.402683659s)
	I0906 18:30:33.998194   13921 main.go:141] libmachine: (addons-009491) Calling .Close
	I0906 18:30:33.998306   13921 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (11.906019064s)
	I0906 18:30:33.998330   13921 main.go:141] libmachine: Making call to close driver server
	I0906 18:30:33.998335   13921 main.go:141] libmachine: Successfully made call to close driver server
	I0906 18:30:33.998349   13921 main.go:141] libmachine: Making call to close connection to plugin binary
	I0906 18:30:33.998358   13921 main.go:141] libmachine: Making call to close driver server
	I0906 18:30:33.998366   13921 main.go:141] libmachine: (addons-009491) Calling .Close
	I0906 18:30:33.998374   13921 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (11.517200429s)
	I0906 18:30:33.998390   13921 main.go:141] libmachine: Making call to close driver server
	I0906 18:30:33.998340   13921 main.go:141] libmachine: (addons-009491) Calling .Close
	I0906 18:30:33.998398   13921 main.go:141] libmachine: (addons-009491) Calling .Close
	I0906 18:30:33.998459   13921 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (11.478593478s)
	I0906 18:30:33.998480   13921 main.go:141] libmachine: Making call to close driver server
	I0906 18:30:33.998489   13921 main.go:141] libmachine: (addons-009491) Calling .Close
	I0906 18:30:33.998627   13921 main.go:141] libmachine: (addons-009491) DBG | Closing plugin on server side
	I0906 18:30:33.998633   13921 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (10.478611412s)
	W0906 18:30:33.998657   13921 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0906 18:30:33.998676   13921 main.go:141] libmachine: (addons-009491) DBG | Closing plugin on server side
	I0906 18:30:33.998695   13921 retry.go:31] will retry after 154.25371ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0906 18:30:33.998701   13921 main.go:141] libmachine: Successfully made call to close driver server
	I0906 18:30:33.998718   13921 main.go:141] libmachine: Making call to close connection to plugin binary
	I0906 18:30:33.998723   13921 main.go:141] libmachine: (addons-009491) DBG | Closing plugin on server side
	I0906 18:30:33.998728   13921 main.go:141] libmachine: Making call to close driver server
	I0906 18:30:33.998744   13921 main.go:141] libmachine: Successfully made call to close driver server
	I0906 18:30:33.998746   13921 main.go:141] libmachine: (addons-009491) Calling .Close
	I0906 18:30:33.998750   13921 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (10.351520991s)
	I0906 18:30:33.998752   13921 main.go:141] libmachine: Making call to close connection to plugin binary
	I0906 18:30:33.998762   13921 main.go:141] libmachine: Making call to close driver server
	I0906 18:30:33.998768   13921 main.go:141] libmachine: (addons-009491) Calling .Close
	I0906 18:30:33.998768   13921 main.go:141] libmachine: Making call to close driver server
	I0906 18:30:33.998777   13921 main.go:141] libmachine: (addons-009491) Calling .Close
	I0906 18:30:33.998707   13921 main.go:141] libmachine: Successfully made call to close driver server
	I0906 18:30:33.998801   13921 main.go:141] libmachine: Making call to close connection to plugin binary
	I0906 18:30:33.998809   13921 main.go:141] libmachine: Making call to close driver server
	I0906 18:30:33.998816   13921 main.go:141] libmachine: (addons-009491) Calling .Close
	I0906 18:30:33.998868   13921 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (9.991231614s)
	I0906 18:30:33.998888   13921 main.go:141] libmachine: Making call to close driver server
	I0906 18:30:33.998896   13921 main.go:141] libmachine: (addons-009491) Calling .Close
	I0906 18:30:33.998263   13921 main.go:141] libmachine: Making call to close driver server
	I0906 18:30:33.998915   13921 main.go:141] libmachine: (addons-009491) Calling .Close
	I0906 18:30:33.998971   13921 main.go:141] libmachine: (addons-009491) DBG | Closing plugin on server side
	I0906 18:30:33.999007   13921 main.go:141] libmachine: Successfully made call to close driver server
	I0906 18:30:33.999014   13921 main.go:141] libmachine: Making call to close connection to plugin binary
	I0906 18:30:33.999022   13921 main.go:141] libmachine: Making call to close driver server
	I0906 18:30:33.999030   13921 main.go:141] libmachine: (addons-009491) Calling .Close
	I0906 18:30:33.999074   13921 main.go:141] libmachine: (addons-009491) DBG | Closing plugin on server side
	I0906 18:30:33.999094   13921 main.go:141] libmachine: Successfully made call to close driver server
	I0906 18:30:33.999101   13921 main.go:141] libmachine: Making call to close connection to plugin binary
	I0906 18:30:33.999108   13921 main.go:141] libmachine: Making call to close driver server
	I0906 18:30:33.999112   13921 main.go:141] libmachine: (addons-009491) DBG | Closing plugin on server side
	I0906 18:30:33.999115   13921 main.go:141] libmachine: (addons-009491) Calling .Close
	I0906 18:30:33.999188   13921 main.go:141] libmachine: (addons-009491) DBG | Closing plugin on server side
	I0906 18:30:33.999205   13921 main.go:141] libmachine: Successfully made call to close driver server
	I0906 18:30:33.999212   13921 main.go:141] libmachine: Making call to close connection to plugin binary
	I0906 18:30:33.999221   13921 main.go:141] libmachine: Making call to close driver server
	I0906 18:30:33.999228   13921 main.go:141] libmachine: (addons-009491) Calling .Close
	I0906 18:30:33.999275   13921 main.go:141] libmachine: Successfully made call to close driver server
	I0906 18:30:33.999286   13921 main.go:141] libmachine: Making call to close connection to plugin binary
	I0906 18:30:34.000275   13921 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml: (12.570410633s)
	I0906 18:30:34.000297   13921 main.go:141] libmachine: Making call to close driver server
	I0906 18:30:34.000308   13921 main.go:141] libmachine: (addons-009491) Calling .Close
	I0906 18:30:34.000364   13921 main.go:141] libmachine: (addons-009491) DBG | Closing plugin on server side
	I0906 18:30:34.000384   13921 main.go:141] libmachine: Successfully made call to close driver server
	I0906 18:30:34.000390   13921 main.go:141] libmachine: Making call to close connection to plugin binary
	I0906 18:30:34.000397   13921 addons.go:475] Verifying addon ingress=true in "addons-009491"
	I0906 18:30:34.000817   13921 main.go:141] libmachine: (addons-009491) DBG | Closing plugin on server side
	I0906 18:30:34.000845   13921 main.go:141] libmachine: Successfully made call to close driver server
	I0906 18:30:34.000852   13921 main.go:141] libmachine: Making call to close connection to plugin binary
	I0906 18:30:34.000860   13921 addons.go:475] Verifying addon registry=true in "addons-009491"
	I0906 18:30:34.000862   13921 main.go:141] libmachine: (addons-009491) DBG | Closing plugin on server side
	I0906 18:30:34.000933   13921 main.go:141] libmachine: Successfully made call to close driver server
	I0906 18:30:34.000944   13921 main.go:141] libmachine: Making call to close connection to plugin binary
	I0906 18:30:34.000954   13921 main.go:141] libmachine: Making call to close driver server
	I0906 18:30:34.000963   13921 main.go:141] libmachine: (addons-009491) Calling .Close
	I0906 18:30:34.001089   13921 main.go:141] libmachine: (addons-009491) DBG | Closing plugin on server side
	I0906 18:30:34.001122   13921 main.go:141] libmachine: Successfully made call to close driver server
	I0906 18:30:34.001130   13921 main.go:141] libmachine: Making call to close connection to plugin binary
	I0906 18:30:34.001138   13921 addons.go:475] Verifying addon metrics-server=true in "addons-009491"
	I0906 18:30:34.002307   13921 main.go:141] libmachine: (addons-009491) DBG | Closing plugin on server side
	I0906 18:30:34.002334   13921 main.go:141] libmachine: Successfully made call to close driver server
	I0906 18:30:34.002340   13921 main.go:141] libmachine: Making call to close connection to plugin binary
	I0906 18:30:34.002348   13921 main.go:141] libmachine: Making call to close driver server
	I0906 18:30:34.002354   13921 main.go:141] libmachine: (addons-009491) Calling .Close
	I0906 18:30:34.002400   13921 main.go:141] libmachine: (addons-009491) DBG | Closing plugin on server side
	I0906 18:30:34.002418   13921 main.go:141] libmachine: Successfully made call to close driver server
	I0906 18:30:34.002425   13921 main.go:141] libmachine: Making call to close connection to plugin binary
	I0906 18:30:34.002432   13921 main.go:141] libmachine: Making call to close driver server
	I0906 18:30:34.002438   13921 main.go:141] libmachine: (addons-009491) Calling .Close
	I0906 18:30:34.002476   13921 main.go:141] libmachine: (addons-009491) DBG | Closing plugin on server side
	I0906 18:30:34.002494   13921 main.go:141] libmachine: Successfully made call to close driver server
	I0906 18:30:34.002500   13921 main.go:141] libmachine: Making call to close connection to plugin binary
	I0906 18:30:34.003271   13921 main.go:141] libmachine: (addons-009491) DBG | Closing plugin on server side
	I0906 18:30:34.003310   13921 main.go:141] libmachine: Successfully made call to close driver server
	I0906 18:30:34.003320   13921 main.go:141] libmachine: Making call to close connection to plugin binary
	I0906 18:30:34.003483   13921 main.go:141] libmachine: Successfully made call to close driver server
	I0906 18:30:34.003641   13921 main.go:141] libmachine: Making call to close connection to plugin binary
	I0906 18:30:34.000905   13921 main.go:141] libmachine: (addons-009491) DBG | Closing plugin on server side
	I0906 18:30:34.003841   13921 main.go:141] libmachine: (addons-009491) DBG | Closing plugin on server side
	I0906 18:30:34.003865   13921 main.go:141] libmachine: Successfully made call to close driver server
	I0906 18:30:34.003871   13921 main.go:141] libmachine: Making call to close connection to plugin binary
	I0906 18:30:34.003909   13921 main.go:141] libmachine: Successfully made call to close driver server
	I0906 18:30:34.003934   13921 main.go:141] libmachine: (addons-009491) DBG | Closing plugin on server side
	I0906 18:30:34.003963   13921 main.go:141] libmachine: Successfully made call to close driver server
	I0906 18:30:34.003918   13921 main.go:141] libmachine: Making call to close connection to plugin binary
	I0906 18:30:34.003969   13921 main.go:141] libmachine: Making call to close connection to plugin binary
	I0906 18:30:34.003990   13921 out.go:177] * Verifying ingress addon...
	I0906 18:30:34.004013   13921 out.go:177] * Verifying registry addon...
	I0906 18:30:34.007122   13921 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-009491 service yakd-dashboard -n yakd-dashboard
	
	I0906 18:30:34.007944   13921 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0906 18:30:34.008004   13921 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0906 18:30:34.055072   13921 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0906 18:30:34.055090   13921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:30:34.055249   13921 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0906 18:30:34.055267   13921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 18:30:34.111280   13921 main.go:141] libmachine: Making call to close driver server
	I0906 18:30:34.111300   13921 main.go:141] libmachine: (addons-009491) Calling .Close
	I0906 18:30:34.111612   13921 main.go:141] libmachine: Successfully made call to close driver server
	I0906 18:30:34.111628   13921 main.go:141] libmachine: Making call to close connection to plugin binary
	I0906 18:30:34.153567   13921 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0906 18:30:34.551954   13921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 18:30:34.552207   13921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:30:34.639948   13921 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (8.957022765s)
	I0906 18:30:34.639998   13921 main.go:141] libmachine: Making call to close driver server
	I0906 18:30:34.640009   13921 main.go:141] libmachine: (addons-009491) Calling .Close
	I0906 18:30:34.639964   13921 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (5.910231181s)
	I0906 18:30:34.640033   13921 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (3.666505733s)
	I0906 18:30:34.640060   13921 api_server.go:72] duration metric: took 13.946506784s to wait for apiserver process to appear ...
	I0906 18:30:34.640072   13921 api_server.go:88] waiting for apiserver healthz status ...
	I0906 18:30:34.640093   13921 api_server.go:253] Checking apiserver healthz at https://192.168.39.227:8443/healthz ...
	I0906 18:30:34.640259   13921 main.go:141] libmachine: (addons-009491) DBG | Closing plugin on server side
	I0906 18:30:34.640313   13921 main.go:141] libmachine: Successfully made call to close driver server
	I0906 18:30:34.640322   13921 main.go:141] libmachine: Making call to close connection to plugin binary
	I0906 18:30:34.640331   13921 main.go:141] libmachine: Making call to close driver server
	I0906 18:30:34.640343   13921 main.go:141] libmachine: (addons-009491) Calling .Close
	I0906 18:30:34.640542   13921 main.go:141] libmachine: Successfully made call to close driver server
	I0906 18:30:34.640554   13921 main.go:141] libmachine: Making call to close connection to plugin binary
	I0906 18:30:34.640563   13921 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-009491"
	I0906 18:30:34.641771   13921 out.go:177] * Verifying csi-hostpath-driver addon...
	I0906 18:30:34.641771   13921 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0906 18:30:34.643062   13921 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0906 18:30:34.643763   13921 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0906 18:30:34.644292   13921 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0906 18:30:34.644308   13921 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0906 18:30:34.668283   13921 api_server.go:279] https://192.168.39.227:8443/healthz returned 200:
	ok
	I0906 18:30:34.670317   13921 api_server.go:141] control plane version: v1.31.0
	I0906 18:30:34.670338   13921 api_server.go:131] duration metric: took 30.260305ms to wait for apiserver health ...
	I0906 18:30:34.670346   13921 system_pods.go:43] waiting for kube-system pods to appear ...
	I0906 18:30:34.690385   13921 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0906 18:30:34.690412   13921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:30:34.696906   13921 system_pods.go:59] 19 kube-system pods found
	I0906 18:30:34.696937   13921 system_pods.go:61] "coredns-6f6b679f8f-s4r7l" [2f91529b-80cf-4ed0-b0a1-654a3a8fae47] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0906 18:30:34.696944   13921 system_pods.go:61] "coredns-6f6b679f8f-w9sz8" [254a315e-3939-4ce1-b748-43087109aa32] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0906 18:30:34.696952   13921 system_pods.go:61] "csi-hostpath-attacher-0" [8a0ead57-d84b-463e-b228-8440793f9eb4] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0906 18:30:34.696956   13921 system_pods.go:61] "csi-hostpath-resizer-0" [63b24ea9-a114-429b-a12b-9148a496b7c9] Pending
	I0906 18:30:34.696962   13921 system_pods.go:61] "csi-hostpathplugin-9vvbl" [e64b50f1-9be2-4863-bfeb-dc9421ed2588] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0906 18:30:34.696966   13921 system_pods.go:61] "etcd-addons-009491" [0fcebd87-1c8f-40e3-8e83-94d2beaa5bc0] Running
	I0906 18:30:34.696969   13921 system_pods.go:61] "kube-apiserver-addons-009491" [16bc329c-4b96-4868-9cfc-b66b0b524b5e] Running
	I0906 18:30:34.696973   13921 system_pods.go:61] "kube-controller-manager-addons-009491" [f387cb93-09b8-4639-be46-c949ed423c9d] Running
	I0906 18:30:34.696981   13921 system_pods.go:61] "kube-ingress-dns-minikube" [daf029af-ec9a-4b45-b191-7fa7df3e88e7] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0906 18:30:34.696985   13921 system_pods.go:61] "kube-proxy-kkz8k" [0defc0a8-2713-45d9-baf2-85d3c5064566] Running
	I0906 18:30:34.696990   13921 system_pods.go:61] "kube-scheduler-addons-009491" [320e0009-6c47-4b93-aa8f-6cd7f1bae6ae] Running
	I0906 18:30:34.697001   13921 system_pods.go:61] "metrics-server-84c5f94fbc-nwfbc" [4d7b0eb8-eecc-4249-ae40-724b201c811f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0906 18:30:34.697012   13921 system_pods.go:61] "nvidia-device-plugin-daemonset-dq95c" [6be1300d-9afe-427c-8d48-8f641e89139a] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0906 18:30:34.697026   13921 system_pods.go:61] "registry-6fb4cdfc84-csqdb" [110dd636-029b-4474-abd2-864399927b41] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0906 18:30:34.697036   13921 system_pods.go:61] "registry-proxy-tpzll" [4c990573-82b7-4c3e-aa76-d699dd353669] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0906 18:30:34.697045   13921 system_pods.go:61] "snapshot-controller-56fcc65765-q4fmt" [419a4183-bc37-49a9-8064-b7717a02bdba] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0906 18:30:34.697053   13921 system_pods.go:61] "snapshot-controller-56fcc65765-vjwff" [e9e563eb-d100-4df1-8b94-fce4b5512521] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0906 18:30:34.697061   13921 system_pods.go:61] "storage-provisioner" [a27e1cf8-ad42-4f76-9dca-1197d98493f9] Running
	I0906 18:30:34.697069   13921 system_pods.go:61] "tiller-deploy-b48cc5f79-ht46b" [cac5bee2-4204-4335-97e5-73d41d66719f] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0906 18:30:34.697079   13921 system_pods.go:74] duration metric: took 26.72647ms to wait for pod list to return data ...
	I0906 18:30:34.697097   13921 default_sa.go:34] waiting for default service account to be created ...
	I0906 18:30:34.743130   13921 default_sa.go:45] found service account: "default"
	I0906 18:30:34.743154   13921 default_sa.go:55] duration metric: took 46.048077ms for default service account to be created ...
	I0906 18:30:34.743163   13921 system_pods.go:116] waiting for k8s-apps to be running ...
	I0906 18:30:34.752756   13921 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0906 18:30:34.752776   13921 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0906 18:30:34.797505   13921 system_pods.go:86] 19 kube-system pods found
	I0906 18:30:34.797538   13921 system_pods.go:89] "coredns-6f6b679f8f-s4r7l" [2f91529b-80cf-4ed0-b0a1-654a3a8fae47] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0906 18:30:34.797548   13921 system_pods.go:89] "coredns-6f6b679f8f-w9sz8" [254a315e-3939-4ce1-b748-43087109aa32] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0906 18:30:34.797559   13921 system_pods.go:89] "csi-hostpath-attacher-0" [8a0ead57-d84b-463e-b228-8440793f9eb4] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0906 18:30:34.797567   13921 system_pods.go:89] "csi-hostpath-resizer-0" [63b24ea9-a114-429b-a12b-9148a496b7c9] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0906 18:30:34.797574   13921 system_pods.go:89] "csi-hostpathplugin-9vvbl" [e64b50f1-9be2-4863-bfeb-dc9421ed2588] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0906 18:30:34.797580   13921 system_pods.go:89] "etcd-addons-009491" [0fcebd87-1c8f-40e3-8e83-94d2beaa5bc0] Running
	I0906 18:30:34.797586   13921 system_pods.go:89] "kube-apiserver-addons-009491" [16bc329c-4b96-4868-9cfc-b66b0b524b5e] Running
	I0906 18:30:34.797591   13921 system_pods.go:89] "kube-controller-manager-addons-009491" [f387cb93-09b8-4639-be46-c949ed423c9d] Running
	I0906 18:30:34.797600   13921 system_pods.go:89] "kube-ingress-dns-minikube" [daf029af-ec9a-4b45-b191-7fa7df3e88e7] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0906 18:30:34.797605   13921 system_pods.go:89] "kube-proxy-kkz8k" [0defc0a8-2713-45d9-baf2-85d3c5064566] Running
	I0906 18:30:34.797611   13921 system_pods.go:89] "kube-scheduler-addons-009491" [320e0009-6c47-4b93-aa8f-6cd7f1bae6ae] Running
	I0906 18:30:34.797622   13921 system_pods.go:89] "metrics-server-84c5f94fbc-nwfbc" [4d7b0eb8-eecc-4249-ae40-724b201c811f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0906 18:30:34.797638   13921 system_pods.go:89] "nvidia-device-plugin-daemonset-dq95c" [6be1300d-9afe-427c-8d48-8f641e89139a] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0906 18:30:34.797651   13921 system_pods.go:89] "registry-6fb4cdfc84-csqdb" [110dd636-029b-4474-abd2-864399927b41] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0906 18:30:34.797659   13921 system_pods.go:89] "registry-proxy-tpzll" [4c990573-82b7-4c3e-aa76-d699dd353669] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0906 18:30:34.797672   13921 system_pods.go:89] "snapshot-controller-56fcc65765-q4fmt" [419a4183-bc37-49a9-8064-b7717a02bdba] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0906 18:30:34.797683   13921 system_pods.go:89] "snapshot-controller-56fcc65765-vjwff" [e9e563eb-d100-4df1-8b94-fce4b5512521] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0906 18:30:34.797691   13921 system_pods.go:89] "storage-provisioner" [a27e1cf8-ad42-4f76-9dca-1197d98493f9] Running
	I0906 18:30:34.797698   13921 system_pods.go:89] "tiller-deploy-b48cc5f79-ht46b" [cac5bee2-4204-4335-97e5-73d41d66719f] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0906 18:30:34.797707   13921 system_pods.go:126] duration metric: took 54.538038ms to wait for k8s-apps to be running ...
	I0906 18:30:34.797721   13921 system_svc.go:44] waiting for kubelet service to be running ....
	I0906 18:30:34.797771   13921 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0906 18:30:34.869312   13921 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0906 18:30:34.869334   13921 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0906 18:30:35.015480   13921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 18:30:35.016020   13921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:30:35.056676   13921 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0906 18:30:35.149188   13921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:30:35.513157   13921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 18:30:35.514667   13921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:30:35.648150   13921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:30:36.027063   13921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 18:30:36.027526   13921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:30:36.149212   13921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:30:36.316552   13921 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (1.518753342s)
	I0906 18:30:36.316606   13921 system_svc.go:56] duration metric: took 1.518868472s WaitForService to wait for kubelet
	I0906 18:30:36.316619   13921 kubeadm.go:582] duration metric: took 15.623064811s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0906 18:30:36.316642   13921 node_conditions.go:102] verifying NodePressure condition ...
	I0906 18:30:36.316551   13921 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.162937457s)
	I0906 18:30:36.316703   13921 main.go:141] libmachine: Making call to close driver server
	I0906 18:30:36.316726   13921 main.go:141] libmachine: (addons-009491) Calling .Close
	I0906 18:30:36.317046   13921 main.go:141] libmachine: (addons-009491) DBG | Closing plugin on server side
	I0906 18:30:36.317073   13921 main.go:141] libmachine: Successfully made call to close driver server
	I0906 18:30:36.317088   13921 main.go:141] libmachine: Making call to close connection to plugin binary
	I0906 18:30:36.317118   13921 main.go:141] libmachine: Making call to close driver server
	I0906 18:30:36.317132   13921 main.go:141] libmachine: (addons-009491) Calling .Close
	I0906 18:30:36.317367   13921 main.go:141] libmachine: Successfully made call to close driver server
	I0906 18:30:36.317381   13921 main.go:141] libmachine: Making call to close connection to plugin binary
	I0906 18:30:36.320204   13921 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0906 18:30:36.320226   13921 node_conditions.go:123] node cpu capacity is 2
	I0906 18:30:36.320236   13921 node_conditions.go:105] duration metric: took 3.587638ms to run NodePressure ...
	I0906 18:30:36.320246   13921 start.go:241] waiting for startup goroutines ...
	I0906 18:30:36.526879   13921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:30:36.529877   13921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 18:30:36.659010   13921 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.602281562s)
	I0906 18:30:36.659059   13921 main.go:141] libmachine: Making call to close driver server
	I0906 18:30:36.659072   13921 main.go:141] libmachine: (addons-009491) Calling .Close
	I0906 18:30:36.659369   13921 main.go:141] libmachine: Successfully made call to close driver server
	I0906 18:30:36.659412   13921 main.go:141] libmachine: Making call to close connection to plugin binary
	I0906 18:30:36.659402   13921 main.go:141] libmachine: (addons-009491) DBG | Closing plugin on server side
	I0906 18:30:36.659430   13921 main.go:141] libmachine: Making call to close driver server
	I0906 18:30:36.659439   13921 main.go:141] libmachine: (addons-009491) Calling .Close
	I0906 18:30:36.659718   13921 main.go:141] libmachine: Successfully made call to close driver server
	I0906 18:30:36.659761   13921 main.go:141] libmachine: Making call to close connection to plugin binary
	I0906 18:30:36.661430   13921 addons.go:475] Verifying addon gcp-auth=true in "addons-009491"
	I0906 18:30:36.663919   13921 out.go:177] * Verifying gcp-auth addon...
	I0906 18:30:36.666473   13921 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0906 18:30:36.670342   13921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:30:36.674055   13921 kapi.go:86] Found 0 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0906 18:30:37.013961   13921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 18:30:37.014602   13921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:30:37.148673   13921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:30:37.517759   13921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 18:30:37.518105   13921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:30:37.651320   13921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:30:38.012535   13921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 18:30:38.014239   13921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:30:38.149078   13921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:30:38.516965   13921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 18:30:38.517287   13921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:30:38.648216   13921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:30:39.013915   13921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 18:30:39.014667   13921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:30:39.149652   13921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:30:39.511900   13921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 18:30:39.515652   13921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:30:39.648900   13921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:30:40.012917   13921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 18:30:40.013321   13921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:30:40.147686   13921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:30:40.512979   13921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 18:30:40.513750   13921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:30:40.996939   13921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:30:41.015398   13921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:30:41.016258   13921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 18:30:41.147899   13921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:30:41.511840   13921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 18:30:41.512040   13921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:30:41.648269   13921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:30:42.013700   13921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 18:30:42.013879   13921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:30:42.148353   13921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:30:42.513078   13921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 18:30:42.513933   13921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:30:42.648365   13921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:30:43.012815   13921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 18:30:43.013055   13921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:30:43.149316   13921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:30:43.512505   13921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:30:43.512526   13921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 18:30:43.762193   13921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:30:44.012846   13921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 18:30:44.013553   13921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:30:44.147816   13921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:30:44.512690   13921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 18:30:44.513011   13921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:30:44.648601   13921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:30:45.013333   13921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 18:30:45.013745   13921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:30:45.148452   13921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:30:45.513089   13921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 18:30:45.513700   13921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:30:45.647621   13921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:30:46.356135   13921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 18:30:46.356355   13921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:30:46.356402   13921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:30:46.512620   13921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 18:30:46.514368   13921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:30:46.648580   13921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:30:47.017892   13921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 18:30:47.018683   13921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:30:47.156825   13921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:30:47.512593   13921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 18:30:47.512832   13921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:30:47.648427   13921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:30:48.012656   13921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 18:30:48.012775   13921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:30:48.149430   13921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:30:48.513700   13921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 18:30:48.514074   13921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:30:48.648515   13921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:30:49.013682   13921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 18:30:49.013904   13921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:30:49.148396   13921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:30:49.513728   13921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 18:30:49.513888   13921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:30:49.648540   13921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:30:50.012774   13921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 18:30:50.012947   13921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:30:50.148477   13921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:30:50.511678   13921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 18:30:50.513059   13921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:30:50.648557   13921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:30:51.037298   13921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 18:30:51.038076   13921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:30:51.149770   13921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:30:51.513028   13921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 18:30:51.513326   13921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:30:51.649722   13921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:30:52.013259   13921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 18:30:52.015979   13921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:30:52.149312   13921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:30:52.643028   13921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 18:30:52.643363   13921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:30:52.647737   13921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:30:53.012305   13921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 18:30:53.013011   13921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:30:53.148459   13921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:30:53.512344   13921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:30:53.514136   13921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 18:30:53.648190   13921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:30:54.012944   13921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 18:30:54.013664   13921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:30:54.147876   13921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:30:54.694104   13921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 18:30:54.694458   13921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:30:54.695654   13921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:30:55.012752   13921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 18:30:55.013207   13921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:30:55.154985   13921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:30:55.514100   13921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 18:30:55.514604   13921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:30:55.650783   13921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:30:56.012312   13921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 18:30:56.012716   13921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:30:56.148283   13921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:30:56.512852   13921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 18:30:56.513501   13921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:30:56.647963   13921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:30:57.013589   13921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 18:30:57.014028   13921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:30:57.151345   13921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:30:57.512789   13921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 18:30:57.512987   13921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:30:57.648987   13921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:30:58.012351   13921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 18:30:58.012770   13921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:30:58.148560   13921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:30:58.512600   13921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 18:30:58.513172   13921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:30:58.648721   13921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:30:59.013222   13921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 18:30:59.013571   13921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:30:59.148822   13921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:30:59.512161   13921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 18:30:59.512549   13921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:30:59.647869   13921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:31:00.012849   13921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 18:31:00.013116   13921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:00.148300   13921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:31:00.512996   13921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 18:31:00.513383   13921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:00.883893   13921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:31:01.016131   13921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 18:31:01.016839   13921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:01.148772   13921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:31:01.513018   13921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 18:31:01.513533   13921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:01.647771   13921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:31:02.012108   13921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 18:31:02.013317   13921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:02.148326   13921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:31:02.513086   13921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:02.514629   13921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 18:31:02.649192   13921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:31:03.013954   13921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 18:31:03.014362   13921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:03.149087   13921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:31:03.513021   13921 kapi.go:107] duration metric: took 29.505073445s to wait for kubernetes.io/minikube-addons=registry ...
	I0906 18:31:03.513722   13921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:03.649057   13921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:31:04.011725   13921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:04.148378   13921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:31:04.513029   13921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:04.649086   13921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:31:05.012996   13921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:05.148185   13921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:31:05.693609   13921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:05.693691   13921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:31:06.013331   13921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:06.148978   13921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:31:06.515683   13921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:06.648285   13921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:31:07.011627   13921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:07.147747   13921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:31:07.570985   13921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:07.677497   13921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:31:08.012999   13921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:08.148027   13921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:31:08.513605   13921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:08.648265   13921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:31:09.013101   13921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:09.148302   13921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:31:09.514005   13921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:09.649410   13921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:31:10.012313   13921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:10.150668   13921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:31:10.511553   13921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:10.651363   13921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:31:11.012129   13921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:11.148214   13921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:31:11.512853   13921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:11.648522   13921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:31:12.012537   13921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:12.148351   13921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:31:12.512629   13921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:12.647892   13921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:31:13.012245   13921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:13.148944   13921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:31:13.512868   13921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:13.649065   13921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:31:14.011940   13921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:14.352814   13921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:31:14.513298   13921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:14.648402   13921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:31:15.011943   13921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:15.148273   13921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:31:15.513483   13921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:15.648704   13921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:31:16.012823   13921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:16.148677   13921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:31:16.512010   13921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:16.648808   13921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:31:17.012706   13921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:17.148219   13921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:31:17.512526   13921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:17.647462   13921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:31:18.014164   13921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:18.147882   13921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:31:18.512010   13921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:18.647511   13921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:31:19.012032   13921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:19.149609   13921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:31:19.810172   13921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:19.813160   13921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:31:20.016539   13921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:20.147976   13921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:31:20.511975   13921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:20.648094   13921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:31:21.012769   13921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:21.152150   13921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:31:21.514078   13921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:21.649245   13921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:31:22.016277   13921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:22.148150   13921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:31:22.511789   13921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:22.647996   13921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:31:23.012741   13921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:23.148591   13921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:31:23.512973   13921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:23.647521   13921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:31:24.012436   13921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:24.147747   13921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:31:24.512590   13921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:24.647902   13921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:31:25.012684   13921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:25.147891   13921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:31:25.511726   13921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:25.648141   13921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:31:26.013620   13921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:26.149683   13921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:31:26.511339   13921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:26.649110   13921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:31:27.013345   13921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:27.148456   13921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:31:27.512663   13921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:27.647895   13921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:31:28.012189   13921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:28.149154   13921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:31:28.513035   13921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:28.648745   13921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:31:29.012801   13921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:29.148094   13921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:31:29.513384   13921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:29.684368   13921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:31:30.012076   13921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:30.148314   13921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:31:30.511721   13921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:30.648287   13921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:31:31.013255   13921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:31.149167   13921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:31:31.511905   13921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:31.648811   13921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:31:32.022059   13921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:32.150204   13921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:31:32.514617   13921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:32.650982   13921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:31:33.012983   13921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:33.148870   13921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:31:33.514752   13921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:33.652262   13921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:31:34.018117   13921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:34.149990   13921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:31:34.512144   13921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:34.649169   13921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:31:35.011703   13921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:35.153567   13921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:31:35.521978   13921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:35.650908   13921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:31:36.012924   13921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:36.148461   13921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:31:36.511945   13921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:36.647918   13921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:31:37.013591   13921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:37.148641   13921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:31:37.523317   13921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:37.651361   13921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:31:38.016968   13921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:38.147833   13921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:31:38.515094   13921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:38.654031   13921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:31:39.012958   13921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:39.149108   13921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:31:39.512471   13921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:39.648935   13921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:31:40.013607   13921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:40.149003   13921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:31:40.512470   13921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:40.648027   13921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:31:41.011998   13921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:41.148251   13921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:31:41.512999   13921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:41.939916   13921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:31:42.011634   13921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:42.148647   13921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:31:42.511955   13921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:42.648487   13921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:31:43.012767   13921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:43.147928   13921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:31:43.512736   13921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:43.648719   13921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:31:44.013444   13921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:44.147639   13921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:31:44.511802   13921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:44.648636   13921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:31:45.405646   13921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:45.405689   13921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:31:45.512051   13921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:45.648812   13921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:31:46.012658   13921 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:46.148856   13921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:31:46.520464   13921 kapi.go:107] duration metric: took 1m12.51244988s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0906 18:31:46.652473   13921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:31:47.148877   13921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:31:47.677700   13921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:31:48.153340   13921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:31:48.649732   13921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:31:49.148113   13921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:31:49.648485   13921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:31:50.148727   13921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:31:50.649208   13921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:31:51.150186   13921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:31:51.648796   13921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:31:52.182532   13921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:31:52.647569   13921 kapi.go:107] duration metric: took 1m18.00380137s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0906 18:31:58.674607   13921 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0906 18:31:58.674634   13921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:31:59.170125   13921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:31:59.670659   13921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:32:00.169910   13921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:32:00.670119   13921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:32:01.170198   13921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:32:01.671834   13921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:32:02.170723   13921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:32:02.670346   13921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:32:03.170817   13921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:32:03.670795   13921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:32:04.170663   13921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:32:04.670291   13921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:32:05.170979   13921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:32:05.671273   13921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:32:06.170819   13921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:32:06.670407   13921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:32:07.170059   13921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:32:07.671177   13921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:32:08.171224   13921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:32:08.670942   13921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:32:09.170582   13921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:32:09.669874   13921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:32:10.171225   13921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:32:10.670938   13921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:32:11.170385   13921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:32:11.669522   13921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:32:12.170278   13921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:32:12.670120   13921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:32:13.170535   13921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:32:13.671286   13921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:32:14.171558   13921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:32:14.670228   13921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:32:15.171255   13921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:32:15.671401   13921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:32:16.172515   13921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:32:16.670067   13921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:32:17.170781   13921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:32:17.670605   13921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:32:18.169588   13921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:32:18.671681   13921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:32:19.170617   13921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:32:19.670533   13921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:32:20.169925   13921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:32:20.670121   13921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:32:21.172229   13921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:32:21.670898   13921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:32:22.176535   13921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:32:22.670312   13921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:32:23.170569   13921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:32:23.671356   13921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:32:24.170742   13921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:32:24.670763   13921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:32:25.169961   13921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:32:25.670821   13921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:32:26.170664   13921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:32:26.670196   13921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:32:27.170546   13921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:32:27.670224   13921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:32:28.170911   13921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:32:28.670744   13921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:32:29.170167   13921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:32:29.670910   13921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:32:30.170183   13921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:32:30.671102   13921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:32:31.170225   13921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:32:31.670760   13921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:32:32.171655   13921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:32:32.671301   13921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:32:33.171105   13921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:32:33.670436   13921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:32:34.170038   13921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:32:34.670655   13921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:32:35.170327   13921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:32:35.670318   13921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:32:36.169904   13921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:32:36.670374   13921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:32:37.170334   13921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:32:37.670974   13921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:32:38.170656   13921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:32:38.671141   13921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:32:39.170657   13921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:32:39.669289   13921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:32:40.170379   13921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:32:40.671290   13921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:32:41.171128   13921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:32:41.670712   13921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:32:42.170865   13921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:32:42.670246   13921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:32:43.171192   13921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:32:43.670433   13921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:32:44.170319   13921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:32:44.670939   13921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:32:45.170091   13921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:32:45.670770   13921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:32:46.170188   13921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:32:46.671244   13921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:32:47.170445   13921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:32:47.670431   13921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:32:48.170505   13921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:32:48.671226   13921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:32:49.170667   13921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:32:49.670400   13921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:32:50.169812   13921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:32:50.670301   13921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:32:51.169503   13921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:32:51.669808   13921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:32:52.169863   13921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:32:52.670585   13921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:32:53.170078   13921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:32:53.670589   13921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:32:54.169944   13921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:32:54.670549   13921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:32:55.170743   13921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:32:55.670895   13921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:32:56.170348   13921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:32:56.671197   13921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:32:57.170528   13921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:32:57.670254   13921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:32:58.171346   13921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:32:58.671377   13921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:32:59.170160   13921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:32:59.671092   13921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:33:00.170781   13921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:33:00.672443   13921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:33:01.169750   13921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:33:01.669997   13921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:33:02.170339   13921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:33:02.670147   13921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:33:03.170678   13921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:33:03.670798   13921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:33:04.170081   13921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:33:04.671104   13921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:33:05.170116   13921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:33:05.671018   13921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:33:06.169903   13921 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:33:06.670219   13921 kapi.go:107] duration metric: took 2m30.003746536s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0906 18:33:06.671513   13921 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-009491 cluster.
	I0906 18:33:06.672593   13921 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0906 18:33:06.673534   13921 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0906 18:33:06.674559   13921 out.go:177] * Enabled addons: storage-provisioner, cloud-spanner, default-storageclass, metrics-server, nvidia-device-plugin, volcano, ingress-dns, inspektor-gadget, helm-tiller, yakd, storage-provisioner-rancher, volumesnapshots, registry, ingress, csi-hostpath-driver, gcp-auth
	I0906 18:33:06.676236   13921 addons.go:510] duration metric: took 2m45.9826624s for enable addons: enabled=[storage-provisioner cloud-spanner default-storageclass metrics-server nvidia-device-plugin volcano ingress-dns inspektor-gadget helm-tiller yakd storage-provisioner-rancher volumesnapshots registry ingress csi-hostpath-driver gcp-auth]
	I0906 18:33:06.676270   13921 start.go:246] waiting for cluster config update ...
	I0906 18:33:06.676287   13921 start.go:255] writing updated cluster config ...
	I0906 18:33:06.676521   13921 ssh_runner.go:195] Run: rm -f paused
	I0906 18:33:06.732017   13921 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0906 18:33:06.733355   13921 out.go:177] * Done! kubectl is now configured to use "addons-009491" cluster and "default" namespace by default
	
	
	==> Docker <==
	Sep 06 18:42:55 addons-009491 dockerd[1192]: time="2024-09-06T18:42:55.659580042Z" level=info msg="shim disconnected" id=cbbdab32251eac18ba7298707d650e5239710cb2cbc935f3a08244221f14f4b6 namespace=moby
	Sep 06 18:42:55 addons-009491 dockerd[1192]: time="2024-09-06T18:42:55.659651925Z" level=warning msg="cleaning up after shim disconnected" id=cbbdab32251eac18ba7298707d650e5239710cb2cbc935f3a08244221f14f4b6 namespace=moby
	Sep 06 18:42:55 addons-009491 dockerd[1192]: time="2024-09-06T18:42:55.659662852Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 06 18:42:55 addons-009491 dockerd[1186]: time="2024-09-06T18:42:55.660704645Z" level=info msg="ignoring event" container=cbbdab32251eac18ba7298707d650e5239710cb2cbc935f3a08244221f14f4b6 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 06 18:43:02 addons-009491 dockerd[1186]: time="2024-09-06T18:43:02.111621935Z" level=info msg="ignoring event" container=02d805dbf385a005834bd0b51cdb46560a35fdb2556a833d25cad0fd2631b2db module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 06 18:43:02 addons-009491 dockerd[1192]: time="2024-09-06T18:43:02.112362038Z" level=info msg="shim disconnected" id=02d805dbf385a005834bd0b51cdb46560a35fdb2556a833d25cad0fd2631b2db namespace=moby
	Sep 06 18:43:02 addons-009491 dockerd[1192]: time="2024-09-06T18:43:02.112621884Z" level=warning msg="cleaning up after shim disconnected" id=02d805dbf385a005834bd0b51cdb46560a35fdb2556a833d25cad0fd2631b2db namespace=moby
	Sep 06 18:43:02 addons-009491 dockerd[1192]: time="2024-09-06T18:43:02.112632461Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 06 18:43:02 addons-009491 dockerd[1192]: time="2024-09-06T18:43:02.623226365Z" level=info msg="shim disconnected" id=ffa735224580677b3eb990ce94067ecde763c5a1ecd3a6eda51225b1e0f77489 namespace=moby
	Sep 06 18:43:02 addons-009491 dockerd[1192]: time="2024-09-06T18:43:02.623737737Z" level=warning msg="cleaning up after shim disconnected" id=ffa735224580677b3eb990ce94067ecde763c5a1ecd3a6eda51225b1e0f77489 namespace=moby
	Sep 06 18:43:02 addons-009491 dockerd[1192]: time="2024-09-06T18:43:02.623886836Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 06 18:43:02 addons-009491 dockerd[1186]: time="2024-09-06T18:43:02.625388166Z" level=info msg="ignoring event" container=ffa735224580677b3eb990ce94067ecde763c5a1ecd3a6eda51225b1e0f77489 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 06 18:43:02 addons-009491 dockerd[1186]: time="2024-09-06T18:43:02.659859152Z" level=info msg="ignoring event" container=84c9fd42da2b0b97e089ac8bf910e51edef2bdb6336b72294e1341c108b8844d module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 06 18:43:02 addons-009491 dockerd[1192]: time="2024-09-06T18:43:02.660695380Z" level=info msg="shim disconnected" id=84c9fd42da2b0b97e089ac8bf910e51edef2bdb6336b72294e1341c108b8844d namespace=moby
	Sep 06 18:43:02 addons-009491 dockerd[1192]: time="2024-09-06T18:43:02.660970338Z" level=warning msg="cleaning up after shim disconnected" id=84c9fd42da2b0b97e089ac8bf910e51edef2bdb6336b72294e1341c108b8844d namespace=moby
	Sep 06 18:43:02 addons-009491 dockerd[1192]: time="2024-09-06T18:43:02.661054186Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 06 18:43:02 addons-009491 dockerd[1186]: time="2024-09-06T18:43:02.803865368Z" level=info msg="ignoring event" container=25546ce658a5346babadb0b02eba011cd154244069253ab962ed1450cd229fc7 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 06 18:43:02 addons-009491 dockerd[1192]: time="2024-09-06T18:43:02.805868656Z" level=info msg="shim disconnected" id=25546ce658a5346babadb0b02eba011cd154244069253ab962ed1450cd229fc7 namespace=moby
	Sep 06 18:43:02 addons-009491 dockerd[1192]: time="2024-09-06T18:43:02.806072584Z" level=warning msg="cleaning up after shim disconnected" id=25546ce658a5346babadb0b02eba011cd154244069253ab962ed1450cd229fc7 namespace=moby
	Sep 06 18:43:02 addons-009491 dockerd[1192]: time="2024-09-06T18:43:02.806153134Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 06 18:43:02 addons-009491 dockerd[1192]: time="2024-09-06T18:43:02.846153360Z" level=warning msg="cleanup warnings time=\"2024-09-06T18:43:02Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=moby
	Sep 06 18:43:02 addons-009491 dockerd[1186]: time="2024-09-06T18:43:02.936748212Z" level=info msg="ignoring event" container=21612f058148bc7151c47553f40ee67654cc7ce6aafc7754efa68dc3694bce72 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 06 18:43:02 addons-009491 dockerd[1192]: time="2024-09-06T18:43:02.937014863Z" level=info msg="shim disconnected" id=21612f058148bc7151c47553f40ee67654cc7ce6aafc7754efa68dc3694bce72 namespace=moby
	Sep 06 18:43:02 addons-009491 dockerd[1192]: time="2024-09-06T18:43:02.937097354Z" level=warning msg="cleaning up after shim disconnected" id=21612f058148bc7151c47553f40ee67654cc7ce6aafc7754efa68dc3694bce72 namespace=moby
	Sep 06 18:43:02 addons-009491 dockerd[1192]: time="2024-09-06T18:43:02.937108688Z" level=info msg="cleaning up dead shim" namespace=moby
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	fc5c3850b1c94       kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6                                  11 seconds ago      Running             hello-world-app           0                   1f0d7cd578171       hello-world-app-55bf9c44b4-x8h87
	d6b915ff95f30       nginx@sha256:c04c18adc2a407740a397c8407c011fc6c90026a9b65cceddef7ae5484360158                                                19 seconds ago      Running             nginx                     0                   c0641df57bd64       nginx
	5783769753e63       ghcr.io/headlamp-k8s/headlamp@sha256:899d106eeb55b0afc4ee6e51c03bc4418de0bd0e79c39744d4d0d751aae6a971                        31 seconds ago      Running             headlamp                  0                   2307ed12f7133       headlamp-57fb76fcdb-cqpqz
	c22cd4e88948e       a416a98b71e22                                                                                                                51 seconds ago      Exited              helper-pod                0                   3a7e7912efa2e       helper-pod-delete-pvc-a78db530-dc97-4b7f-a847-310a42db2e7a
	d4cabb69d44cc       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:e6c5b3bc32072ea370d34c27836efd11b3519d25bd444c2a8efc339cff0e20fb                 9 minutes ago       Running             gcp-auth                  0                   dbf3057a12427       gcp-auth-89d5ffd79-2gcq2
	94a6f7cff7f12       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a320a50cc91bd15fd2d6fa6de58bd98c1bd64b9a6f926ce23a600d87043455a3   11 minutes ago      Exited              patch                     0                   b9d78f21f3240       ingress-nginx-admission-patch-9lh6v
	35075e5d1d5d8       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a320a50cc91bd15fd2d6fa6de58bd98c1bd64b9a6f926ce23a600d87043455a3   11 minutes ago      Exited              create                    0                   18a11f88d3ea3       ingress-nginx-admission-create-vl8th
	593f3b8b68955       6e38f40d628db                                                                                                                12 minutes ago      Running             storage-provisioner       0                   4c0bfefc6afb5       storage-provisioner
	90a9a485d1cf5       cbb01a7bd410d                                                                                                                12 minutes ago      Running             coredns                   0                   9f9d8e47a646c       coredns-6f6b679f8f-s4r7l
	2f44f6d895cba       ad83b2ca7b09e                                                                                                                12 minutes ago      Running             kube-proxy                0                   cf52f5d1ad76a       kube-proxy-kkz8k
	d364e3b405137       604f5db92eaa8                                                                                                                12 minutes ago      Running             kube-apiserver            0                   9d8ab289e7f49       kube-apiserver-addons-009491
	3f0384ff7ebdd       1766f54c897f0                                                                                                                12 minutes ago      Running             kube-scheduler            0                   1491fe9d517d5       kube-scheduler-addons-009491
	43711b1c2a183       045733566833c                                                                                                                12 minutes ago      Running             kube-controller-manager   0                   7fee7d7c3db73       kube-controller-manager-addons-009491
	0738bc72c37b7       2e96e5913fc06                                                                                                                12 minutes ago      Running             etcd                      0                   5d91b0e411b46       etcd-addons-009491
	
	
	==> coredns [90a9a485d1cf] <==
	[INFO] 10.244.0.22:49611 - 10279 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000059489s
	[INFO] 10.244.0.22:37834 - 43325 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000091713s
	[INFO] 10.244.0.22:49611 - 29066 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000191507s
	[INFO] 10.244.0.22:49611 - 62770 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.00010295s
	[INFO] 10.244.0.22:37834 - 11998 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000200624s
	[INFO] 10.244.0.22:49611 - 55030 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000133905s
	[INFO] 10.244.0.22:49611 - 36055 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000101978s
	[INFO] 10.244.0.22:37834 - 18811 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000303554s
	[INFO] 10.244.0.22:49611 - 19714 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000489127s
	[INFO] 10.244.0.22:37834 - 64447 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000065874s
	[INFO] 10.244.0.22:37834 - 35947 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000046805s
	[INFO] 10.244.0.22:45469 - 15065 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000130631s
	[INFO] 10.244.0.22:45469 - 60042 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000076528s
	[INFO] 10.244.0.22:45469 - 41104 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000038476s
	[INFO] 10.244.0.22:45469 - 24036 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000059752s
	[INFO] 10.244.0.22:45469 - 46751 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000065941s
	[INFO] 10.244.0.22:45469 - 2432 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000114052s
	[INFO] 10.244.0.22:45469 - 27377 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000078233s
	[INFO] 10.244.0.22:39847 - 31856 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000069564s
	[INFO] 10.244.0.22:39847 - 63249 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000154159s
	[INFO] 10.244.0.22:39847 - 57278 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.00004191s
	[INFO] 10.244.0.22:39847 - 29544 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000048069s
	[INFO] 10.244.0.22:39847 - 1749 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000106217s
	[INFO] 10.244.0.22:39847 - 49577 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000063214s
	[INFO] 10.244.0.22:39847 - 98 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000044081s
	
	
	==> describe nodes <==
	Name:               addons-009491
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-009491
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e6b6435971a63e36b5096cd544634422129cef13
	                    minikube.k8s.io/name=addons-009491
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_06T18_30_16_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-009491
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 06 Sep 2024 18:30:13 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-009491
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 06 Sep 2024 18:43:02 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 06 Sep 2024 18:42:49 +0000   Fri, 06 Sep 2024 18:30:11 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 06 Sep 2024 18:42:49 +0000   Fri, 06 Sep 2024 18:30:11 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 06 Sep 2024 18:42:49 +0000   Fri, 06 Sep 2024 18:30:11 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 06 Sep 2024 18:42:49 +0000   Fri, 06 Sep 2024 18:30:17 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.227
	  Hostname:    addons-009491
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	System Info:
	  Machine ID:                 f86262621ba54aee92f2f24f1bb79125
	  System UUID:                f8626262-1ba5-4aee-92f2-f24f1bb79125
	  Boot ID:                    6046d5d4-5665-4443-96d4-6ef4795b6f59
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.2.0
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m15s
	  default                     hello-world-app-55bf9c44b4-x8h87         0 (0%)        0 (0%)      0 (0%)           0 (0%)         14s
	  default                     nginx                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         23s
	  gcp-auth                    gcp-auth-89d5ffd79-2gcq2                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  headlamp                    headlamp-57fb76fcdb-cqpqz                0 (0%)        0 (0%)      0 (0%)           0 (0%)         37s
	  kube-system                 coredns-6f6b679f8f-s4r7l                 100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     12m
	  kube-system                 etcd-addons-009491                       100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         12m
	  kube-system                 kube-apiserver-addons-009491             250m (12%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-controller-manager-addons-009491    200m (10%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-kkz8k                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-addons-009491             100m (5%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 storage-provisioner                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (4%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 12m                kube-proxy       
	  Normal  NodeHasSufficientMemory  12m (x8 over 12m)  kubelet          Node addons-009491 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    12m (x8 over 12m)  kubelet          Node addons-009491 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     12m (x7 over 12m)  kubelet          Node addons-009491 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  12m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 12m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  12m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  12m                kubelet          Node addons-009491 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    12m                kubelet          Node addons-009491 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     12m                kubelet          Node addons-009491 status is now: NodeHasSufficientPID
	  Normal  NodeReady                12m                kubelet          Node addons-009491 status is now: NodeReady
	  Normal  RegisteredNode           12m                node-controller  Node addons-009491 event: Registered Node addons-009491 in Controller
	
	
	==> dmesg <==
	[  +7.165187] kauditd_printk_skb: 18 callbacks suppressed
	[  +5.111005] kauditd_printk_skb: 33 callbacks suppressed
	[  +8.667120] kauditd_printk_skb: 50 callbacks suppressed
	[  +6.363918] kauditd_printk_skb: 23 callbacks suppressed
	[Sep 6 18:32] kauditd_printk_skb: 32 callbacks suppressed
	[  +8.558516] kauditd_printk_skb: 28 callbacks suppressed
	[Sep 6 18:33] kauditd_printk_skb: 40 callbacks suppressed
	[ +16.533889] kauditd_printk_skb: 9 callbacks suppressed
	[  +5.115773] kauditd_printk_skb: 28 callbacks suppressed
	[  +7.005955] kauditd_printk_skb: 2 callbacks suppressed
	[ +17.492966] kauditd_printk_skb: 20 callbacks suppressed
	[Sep 6 18:34] kauditd_printk_skb: 2 callbacks suppressed
	[Sep 6 18:37] kauditd_printk_skb: 28 callbacks suppressed
	[Sep 6 18:41] kauditd_printk_skb: 28 callbacks suppressed
	[Sep 6 18:42] kauditd_printk_skb: 56 callbacks suppressed
	[  +5.007892] kauditd_printk_skb: 35 callbacks suppressed
	[  +5.154068] kauditd_printk_skb: 29 callbacks suppressed
	[  +7.041872] kauditd_printk_skb: 32 callbacks suppressed
	[  +5.207380] kauditd_printk_skb: 39 callbacks suppressed
	[  +5.935344] kauditd_printk_skb: 18 callbacks suppressed
	[  +6.483223] kauditd_printk_skb: 15 callbacks suppressed
	[  +5.552527] kauditd_printk_skb: 7 callbacks suppressed
	[  +7.108659] kauditd_printk_skb: 5 callbacks suppressed
	[  +5.395989] kauditd_printk_skb: 19 callbacks suppressed
	[Sep 6 18:43] kauditd_printk_skb: 2 callbacks suppressed
	
	
	==> etcd [0738bc72c37b] <==
	{"level":"info","ts":"2024-09-06T18:31:41.896888Z","caller":"traceutil/trace.go:171","msg":"trace[875397337] range","detail":"{range_begin:/registry/priorityclasses/; range_end:/registry/priorityclasses0; response_count:0; response_revision:1269; }","duration":"301.168149ms","start":"2024-09-06T18:31:41.595711Z","end":"2024-09-06T18:31:41.896879Z","steps":["trace[875397337] 'agreement among raft nodes before linearized reading'  (duration: 301.108714ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-06T18:31:41.896909Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-06T18:31:41.595685Z","time spent":"301.21951ms","remote":"127.0.0.1:58670","response type":"/etcdserverpb.KV/Range","request count":0,"request size":58,"response count":2,"response size":31,"request content":"key:\"/registry/priorityclasses/\" range_end:\"/registry/priorityclasses0\" count_only:true "}
	{"level":"warn","ts":"2024-09-06T18:31:41.897056Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"290.12384ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-06T18:31:41.897072Z","caller":"traceutil/trace.go:171","msg":"trace[385943751] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1269; }","duration":"290.140281ms","start":"2024-09-06T18:31:41.606926Z","end":"2024-09-06T18:31:41.897067Z","steps":["trace[385943751] 'agreement among raft nodes before linearized reading'  (duration: 290.113992ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-06T18:31:41.897225Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"267.869423ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-06T18:31:41.897240Z","caller":"traceutil/trace.go:171","msg":"trace[1697044224] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1269; }","duration":"267.885228ms","start":"2024-09-06T18:31:41.629350Z","end":"2024-09-06T18:31:41.897235Z","steps":["trace[1697044224] 'agreement among raft nodes before linearized reading'  (duration: 267.860653ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-06T18:31:45.362075Z","caller":"traceutil/trace.go:171","msg":"trace[1913242404] linearizableReadLoop","detail":"{readStateIndex:1311; appliedIndex:1310; }","duration":"391.167638ms","start":"2024-09-06T18:31:44.970896Z","end":"2024-09-06T18:31:45.362063Z","steps":["trace[1913242404] 'read index received'  (duration: 391.044086ms)","trace[1913242404] 'applied index is now lower than readState.Index'  (duration: 122.983µs)"],"step_count":2}
	{"level":"info","ts":"2024-09-06T18:31:45.362106Z","caller":"traceutil/trace.go:171","msg":"trace[274019111] transaction","detail":"{read_only:false; response_revision:1273; number_of_response:1; }","duration":"399.348723ms","start":"2024-09-06T18:31:44.962739Z","end":"2024-09-06T18:31:45.362088Z","steps":["trace[274019111] 'process raft request'  (duration: 399.227645ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-06T18:31:45.362174Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"391.263863ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-06T18:31:45.362192Z","caller":"traceutil/trace.go:171","msg":"trace[1131596095] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1273; }","duration":"391.294888ms","start":"2024-09-06T18:31:44.970892Z","end":"2024-09-06T18:31:45.362187Z","steps":["trace[1131596095] 'agreement among raft nodes before linearized reading'  (duration: 391.23465ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-06T18:31:45.362202Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-06T18:31:44.962717Z","time spent":"399.427699ms","remote":"127.0.0.1:58390","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":764,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/events/gadget/gadget-kvrz6.17f2bb421bf6aef6\" mod_revision:1272 > success:<request_put:<key:\"/registry/events/gadget/gadget-kvrz6.17f2bb421bf6aef6\" value_size:693 lease:3025453340446766952 >> failure:<request_range:<key:\"/registry/events/gadget/gadget-kvrz6.17f2bb421bf6aef6\" > >"}
	{"level":"warn","ts":"2024-09-06T18:31:45.362229Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"254.617519ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-06T18:31:45.362242Z","caller":"traceutil/trace.go:171","msg":"trace[1921648097] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1273; }","duration":"254.628739ms","start":"2024-09-06T18:31:45.107608Z","end":"2024-09-06T18:31:45.362237Z","steps":["trace[1921648097] 'agreement among raft nodes before linearized reading'  (duration: 254.612543ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-06T18:31:45.362208Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-06T18:31:44.970866Z","time spent":"391.338673ms","remote":"127.0.0.1:58488","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":29,"request content":"key:\"/registry/pods\" limit:1 "}
	{"level":"warn","ts":"2024-09-06T18:31:45.362462Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"232.629246ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-06T18:31:45.362482Z","caller":"traceutil/trace.go:171","msg":"trace[490699639] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1273; }","duration":"232.649014ms","start":"2024-09-06T18:31:45.129826Z","end":"2024-09-06T18:31:45.362475Z","steps":["trace[490699639] 'agreement among raft nodes before linearized reading'  (duration: 232.614692ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-06T18:31:52.134365Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"106.048309ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/leases/kube-system/snapshot-controller-leader\" ","response":"range_response_count:1 size:498"}
	{"level":"info","ts":"2024-09-06T18:31:52.134909Z","caller":"traceutil/trace.go:171","msg":"trace[1195434988] range","detail":"{range_begin:/registry/leases/kube-system/snapshot-controller-leader; range_end:; response_count:1; response_revision:1327; }","duration":"106.594272ms","start":"2024-09-06T18:31:52.028298Z","end":"2024-09-06T18:31:52.134892Z","steps":["trace[1195434988] 'range keys from in-memory index tree'  (duration: 105.896869ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-06T18:33:31.487393Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"126.022994ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 serializable:true keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-06T18:33:31.487648Z","caller":"traceutil/trace.go:171","msg":"trace[540771186] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1604; }","duration":"126.572364ms","start":"2024-09-06T18:33:31.361051Z","end":"2024-09-06T18:33:31.487623Z","steps":["trace[540771186] 'range keys from in-memory index tree'  (duration: 125.998778ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-06T18:40:12.415897Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1916}
	{"level":"info","ts":"2024-09-06T18:40:12.521219Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1916,"took":"103.58947ms","hash":4238645578,"current-db-size-bytes":8912896,"current-db-size":"8.9 MB","current-db-size-in-use-bytes":5062656,"current-db-size-in-use":"5.1 MB"}
	{"level":"info","ts":"2024-09-06T18:40:12.521816Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":4238645578,"revision":1916,"compact-revision":-1}
	{"level":"info","ts":"2024-09-06T18:42:31.368272Z","caller":"traceutil/trace.go:171","msg":"trace[1511634821] transaction","detail":"{read_only:false; response_revision:2929; number_of_response:1; }","duration":"303.757087ms","start":"2024-09-06T18:42:31.064465Z","end":"2024-09-06T18:42:31.368222Z","steps":["trace[1511634821] 'process raft request'  (duration: 303.296271ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-06T18:42:31.368648Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-06T18:42:31.064452Z","time spent":"303.968838ms","remote":"127.0.0.1:58470","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1098,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:2901 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1025 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	
	
	==> gcp-auth [d4cabb69d44c] <==
	2024/09/06 18:33:48 Ready to write response ...
	2024/09/06 18:41:56 Ready to marshal response ...
	2024/09/06 18:41:56 Ready to write response ...
	2024/09/06 18:41:57 Ready to marshal response ...
	2024/09/06 18:41:57 Ready to write response ...
	2024/09/06 18:41:57 Ready to marshal response ...
	2024/09/06 18:41:57 Ready to write response ...
	2024/09/06 18:41:57 Ready to marshal response ...
	2024/09/06 18:41:57 Ready to write response ...
	2024/09/06 18:42:01 Ready to marshal response ...
	2024/09/06 18:42:01 Ready to write response ...
	2024/09/06 18:42:12 Ready to marshal response ...
	2024/09/06 18:42:12 Ready to write response ...
	2024/09/06 18:42:14 Ready to marshal response ...
	2024/09/06 18:42:14 Ready to write response ...
	2024/09/06 18:42:26 Ready to marshal response ...
	2024/09/06 18:42:26 Ready to write response ...
	2024/09/06 18:42:26 Ready to marshal response ...
	2024/09/06 18:42:26 Ready to write response ...
	2024/09/06 18:42:26 Ready to marshal response ...
	2024/09/06 18:42:26 Ready to write response ...
	2024/09/06 18:42:40 Ready to marshal response ...
	2024/09/06 18:42:40 Ready to write response ...
	2024/09/06 18:42:49 Ready to marshal response ...
	2024/09/06 18:42:49 Ready to write response ...
	
	
	==> kernel <==
	 18:43:03 up 13 min,  0 users,  load average: 1.01, 0.87, 0.72
	Linux addons-009491 5.10.207 #1 SMP Tue Sep 3 21:45:30 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [d364e3b40513] <==
	W0906 18:33:40.736328       1 cacher.go:171] Terminating all watchers from cacher queues.scheduling.volcano.sh
	W0906 18:33:41.042134       1 cacher.go:171] Terminating all watchers from cacher jobflows.flow.volcano.sh
	W0906 18:33:41.363166       1 cacher.go:171] Terminating all watchers from cacher jobtemplates.flow.volcano.sh
	I0906 18:42:07.221984       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I0906 18:42:10.614715       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	I0906 18:42:26.921285       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.99.208.154"}
	E0906 18:42:28.304175       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	I0906 18:42:30.338047       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0906 18:42:30.338146       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0906 18:42:30.374929       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0906 18:42:30.375032       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0906 18:42:30.535179       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0906 18:42:30.538156       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0906 18:42:30.641735       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0906 18:42:30.641781       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0906 18:42:30.675990       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0906 18:42:30.676043       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0906 18:42:31.645203       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0906 18:42:31.678138       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0906 18:42:31.697018       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I0906 18:42:37.317232       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0906 18:42:38.343705       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I0906 18:42:40.427320       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I0906 18:42:40.595791       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.97.42.161"}
	I0906 18:42:50.056904       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.104.53.175"}
	
	
	==> kube-controller-manager [43711b1c2a18] <==
	I0906 18:42:49.968728       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="440.873µs"
	W0906 18:42:50.840180       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0906 18:42:50.840239       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0906 18:42:51.246242       1 shared_informer.go:313] Waiting for caches to sync for resource quota
	I0906 18:42:51.246447       1 shared_informer.go:320] Caches are synced for resource quota
	I0906 18:42:51.381171       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I0906 18:42:51.381282       1 shared_informer.go:320] Caches are synced for garbage collector
	W0906 18:42:51.624157       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0906 18:42:51.624395       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0906 18:42:51.947405       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0906 18:42:51.947443       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0906 18:42:52.370704       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="ingress-nginx/ingress-nginx-admission-create" delay="0s"
	I0906 18:42:52.373944       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-bc57996ff" duration="4.039µs"
	I0906 18:42:52.385994       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="ingress-nginx/ingress-nginx-admission-patch" delay="0s"
	I0906 18:42:52.763836       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="10.492662ms"
	I0906 18:42:52.764902       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="32.224µs"
	W0906 18:42:53.451823       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0906 18:42:53.451867       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0906 18:42:56.226344       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0906 18:42:56.226436       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0906 18:42:57.901718       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0906 18:42:57.901767       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0906 18:43:00.658204       1 namespace_controller.go:187] "Namespace has been deleted" logger="namespace-controller" namespace="local-path-storage"
	I0906 18:43:02.393790       1 namespace_controller.go:187] "Namespace has been deleted" logger="namespace-controller" namespace="ingress-nginx"
	I0906 18:43:02.495645       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/registry-6fb4cdfc84" duration="3.218µs"
	
	
	==> kube-proxy [2f44f6d895cb] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0906 18:30:23.218615       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0906 18:30:23.241894       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.227"]
	E0906 18:30:23.241986       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0906 18:30:23.377292       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0906 18:30:23.377325       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0906 18:30:23.377353       1 server_linux.go:169] "Using iptables Proxier"
	I0906 18:30:23.389169       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0906 18:30:23.389460       1 server.go:483] "Version info" version="v1.31.0"
	I0906 18:30:23.389472       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0906 18:30:23.392615       1 config.go:197] "Starting service config controller"
	I0906 18:30:23.392627       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0906 18:30:23.392687       1 config.go:104] "Starting endpoint slice config controller"
	I0906 18:30:23.392691       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0906 18:30:23.393070       1 config.go:326] "Starting node config controller"
	I0906 18:30:23.393077       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0906 18:30:23.493013       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0906 18:30:23.493060       1 shared_informer.go:320] Caches are synced for service config
	I0906 18:30:23.511298       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [3f0384ff7ebd] <==
	E0906 18:30:13.828399       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0906 18:30:13.828465       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0906 18:30:13.828502       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	E0906 18:30:13.828665       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0906 18:30:13.827238       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0906 18:30:13.828993       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0906 18:30:13.827755       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0906 18:30:13.829332       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0906 18:30:14.641978       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0906 18:30:14.642205       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0906 18:30:14.669272       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0906 18:30:14.669597       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0906 18:30:14.756289       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0906 18:30:14.756620       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0906 18:30:14.835019       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0906 18:30:14.835293       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0906 18:30:14.870831       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0906 18:30:14.870994       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0906 18:30:14.988140       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0906 18:30:14.988193       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0906 18:30:15.065828       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0906 18:30:15.066782       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0906 18:30:15.215338       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0906 18:30:15.215410       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I0906 18:30:16.915584       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 06 18:42:55 addons-009491 kubelet[1978]: I0906 18:42:55.836011    1978 scope.go:117] "RemoveContainer" containerID="b687d5ecd5a56c1616111567ecdaa0dedb6ba32cb280ab1ccdd71f87d1627038"
	Sep 06 18:42:55 addons-009491 kubelet[1978]: E0906 18:42:55.837047    1978 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = Unknown desc = Error response from daemon: No such container: b687d5ecd5a56c1616111567ecdaa0dedb6ba32cb280ab1ccdd71f87d1627038" containerID="b687d5ecd5a56c1616111567ecdaa0dedb6ba32cb280ab1ccdd71f87d1627038"
	Sep 06 18:42:55 addons-009491 kubelet[1978]: I0906 18:42:55.837079    1978 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"docker","ID":"b687d5ecd5a56c1616111567ecdaa0dedb6ba32cb280ab1ccdd71f87d1627038"} err="failed to get container status \"b687d5ecd5a56c1616111567ecdaa0dedb6ba32cb280ab1ccdd71f87d1627038\": rpc error: code = Unknown desc = Error response from daemon: No such container: b687d5ecd5a56c1616111567ecdaa0dedb6ba32cb280ab1ccdd71f87d1627038"
	Sep 06 18:42:55 addons-009491 kubelet[1978]: I0906 18:42:55.927492    1978 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-4z8d6\" (UniqueName: \"kubernetes.io/projected/dc6cfbb3-de2a-4d3b-864a-9aed020159eb-kube-api-access-4z8d6\") on node \"addons-009491\" DevicePath \"\""
	Sep 06 18:42:55 addons-009491 kubelet[1978]: I0906 18:42:55.927597    1978 reconciler_common.go:288] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/dc6cfbb3-de2a-4d3b-864a-9aed020159eb-webhook-cert\") on node \"addons-009491\" DevicePath \"\""
	Sep 06 18:42:56 addons-009491 kubelet[1978]: I0906 18:42:56.228043    1978 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dc6cfbb3-de2a-4d3b-864a-9aed020159eb" path="/var/lib/kubelet/pods/dc6cfbb3-de2a-4d3b-864a-9aed020159eb/volumes"
	Sep 06 18:42:58 addons-009491 kubelet[1978]: E0906 18:42:58.216932    1978 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-test\" with ImagePullBackOff: \"Back-off pulling image \\\"gcr.io/k8s-minikube/busybox\\\"\"" pod="default/registry-test" podUID="bca66a5c-ad63-4f9f-84b5-aeee51f2aca6"
	Sep 06 18:43:01 addons-009491 kubelet[1978]: E0906 18:43:01.216490    1978 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\\\"\"" pod="default/busybox" podUID="20e29c8a-d436-46dd-90f5-7d3417468fe3"
	Sep 06 18:43:02 addons-009491 kubelet[1978]: I0906 18:43:02.271263    1978 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f9p9g\" (UniqueName: \"kubernetes.io/projected/bca66a5c-ad63-4f9f-84b5-aeee51f2aca6-kube-api-access-f9p9g\") pod \"bca66a5c-ad63-4f9f-84b5-aeee51f2aca6\" (UID: \"bca66a5c-ad63-4f9f-84b5-aeee51f2aca6\") "
	Sep 06 18:43:02 addons-009491 kubelet[1978]: I0906 18:43:02.271299    1978 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/bca66a5c-ad63-4f9f-84b5-aeee51f2aca6-gcp-creds\") pod \"bca66a5c-ad63-4f9f-84b5-aeee51f2aca6\" (UID: \"bca66a5c-ad63-4f9f-84b5-aeee51f2aca6\") "
	Sep 06 18:43:02 addons-009491 kubelet[1978]: I0906 18:43:02.271370    1978 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bca66a5c-ad63-4f9f-84b5-aeee51f2aca6-gcp-creds" (OuterVolumeSpecName: "gcp-creds") pod "bca66a5c-ad63-4f9f-84b5-aeee51f2aca6" (UID: "bca66a5c-ad63-4f9f-84b5-aeee51f2aca6"). InnerVolumeSpecName "gcp-creds". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Sep 06 18:43:02 addons-009491 kubelet[1978]: I0906 18:43:02.275804    1978 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bca66a5c-ad63-4f9f-84b5-aeee51f2aca6-kube-api-access-f9p9g" (OuterVolumeSpecName: "kube-api-access-f9p9g") pod "bca66a5c-ad63-4f9f-84b5-aeee51f2aca6" (UID: "bca66a5c-ad63-4f9f-84b5-aeee51f2aca6"). InnerVolumeSpecName "kube-api-access-f9p9g". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 06 18:43:02 addons-009491 kubelet[1978]: I0906 18:43:02.372106    1978 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-f9p9g\" (UniqueName: \"kubernetes.io/projected/bca66a5c-ad63-4f9f-84b5-aeee51f2aca6-kube-api-access-f9p9g\") on node \"addons-009491\" DevicePath \"\""
	Sep 06 18:43:02 addons-009491 kubelet[1978]: I0906 18:43:02.372153    1978 reconciler_common.go:288] "Volume detached for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/bca66a5c-ad63-4f9f-84b5-aeee51f2aca6-gcp-creds\") on node \"addons-009491\" DevicePath \"\""
	Sep 06 18:43:02 addons-009491 kubelet[1978]: I0906 18:43:02.975713    1978 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tq8lm\" (UniqueName: \"kubernetes.io/projected/110dd636-029b-4474-abd2-864399927b41-kube-api-access-tq8lm\") pod \"110dd636-029b-4474-abd2-864399927b41\" (UID: \"110dd636-029b-4474-abd2-864399927b41\") "
	Sep 06 18:43:02 addons-009491 kubelet[1978]: I0906 18:43:02.983454    1978 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/110dd636-029b-4474-abd2-864399927b41-kube-api-access-tq8lm" (OuterVolumeSpecName: "kube-api-access-tq8lm") pod "110dd636-029b-4474-abd2-864399927b41" (UID: "110dd636-029b-4474-abd2-864399927b41"). InnerVolumeSpecName "kube-api-access-tq8lm". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 06 18:43:03 addons-009491 kubelet[1978]: I0906 18:43:03.004249    1978 scope.go:117] "RemoveContainer" containerID="84c9fd42da2b0b97e089ac8bf910e51edef2bdb6336b72294e1341c108b8844d"
	Sep 06 18:43:03 addons-009491 kubelet[1978]: I0906 18:43:03.073508    1978 scope.go:117] "RemoveContainer" containerID="ffa735224580677b3eb990ce94067ecde763c5a1ecd3a6eda51225b1e0f77489"
	Sep 06 18:43:03 addons-009491 kubelet[1978]: I0906 18:43:03.077742    1978 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m9mg5\" (UniqueName: \"kubernetes.io/projected/4c990573-82b7-4c3e-aa76-d699dd353669-kube-api-access-m9mg5\") pod \"4c990573-82b7-4c3e-aa76-d699dd353669\" (UID: \"4c990573-82b7-4c3e-aa76-d699dd353669\") "
	Sep 06 18:43:03 addons-009491 kubelet[1978]: I0906 18:43:03.078390    1978 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-tq8lm\" (UniqueName: \"kubernetes.io/projected/110dd636-029b-4474-abd2-864399927b41-kube-api-access-tq8lm\") on node \"addons-009491\" DevicePath \"\""
	Sep 06 18:43:03 addons-009491 kubelet[1978]: I0906 18:43:03.083848    1978 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4c990573-82b7-4c3e-aa76-d699dd353669-kube-api-access-m9mg5" (OuterVolumeSpecName: "kube-api-access-m9mg5") pod "4c990573-82b7-4c3e-aa76-d699dd353669" (UID: "4c990573-82b7-4c3e-aa76-d699dd353669"). InnerVolumeSpecName "kube-api-access-m9mg5". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 06 18:43:03 addons-009491 kubelet[1978]: I0906 18:43:03.117069    1978 scope.go:117] "RemoveContainer" containerID="ffa735224580677b3eb990ce94067ecde763c5a1ecd3a6eda51225b1e0f77489"
	Sep 06 18:43:03 addons-009491 kubelet[1978]: E0906 18:43:03.118762    1978 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = Unknown desc = Error response from daemon: No such container: ffa735224580677b3eb990ce94067ecde763c5a1ecd3a6eda51225b1e0f77489" containerID="ffa735224580677b3eb990ce94067ecde763c5a1ecd3a6eda51225b1e0f77489"
	Sep 06 18:43:03 addons-009491 kubelet[1978]: I0906 18:43:03.118875    1978 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"docker","ID":"ffa735224580677b3eb990ce94067ecde763c5a1ecd3a6eda51225b1e0f77489"} err="failed to get container status \"ffa735224580677b3eb990ce94067ecde763c5a1ecd3a6eda51225b1e0f77489\": rpc error: code = Unknown desc = Error response from daemon: No such container: ffa735224580677b3eb990ce94067ecde763c5a1ecd3a6eda51225b1e0f77489"
	Sep 06 18:43:03 addons-009491 kubelet[1978]: I0906 18:43:03.179867    1978 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-m9mg5\" (UniqueName: \"kubernetes.io/projected/4c990573-82b7-4c3e-aa76-d699dd353669-kube-api-access-m9mg5\") on node \"addons-009491\" DevicePath \"\""
	
	
	==> storage-provisioner [593f3b8b6895] <==
	I0906 18:30:29.676912       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0906 18:30:29.895768       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0906 18:30:29.895850       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0906 18:30:29.927446       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0906 18:30:29.928769       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-009491_1d798c65-ce19-4f34-acbb-0438d19a2560!
	I0906 18:30:29.928860       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"44b84a7c-b0a2-40f4-8474-3d6a5303bbbd", APIVersion:"v1", ResourceVersion:"677", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-009491_1d798c65-ce19-4f34-acbb-0438d19a2560 became leader
	I0906 18:30:30.029641       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-009491_1d798c65-ce19-4f34-acbb-0438d19a2560!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-009491 -n addons-009491
helpers_test.go:261: (dbg) Run:  kubectl --context addons-009491 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/Registry]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-009491 describe pod busybox
helpers_test.go:282: (dbg) kubectl --context addons-009491 describe pod busybox:

-- stdout --
	Name:             busybox
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-009491/192.168.39.227
	Start Time:       Fri, 06 Sep 2024 18:33:48 +0000
	Labels:           integration-test=busybox
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.28
	IPs:
	  IP:  10.244.0.28
	Containers:
	  busybox:
	    Container ID:  
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      sleep
	      3600
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:
	      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
	      PROJECT_ID:                      this_is_fake
	      GCP_PROJECT:                     this_is_fake
	      GCLOUD_PROJECT:                  this_is_fake
	      GOOGLE_CLOUD_PROJECT:            this_is_fake
	      CLOUDSDK_CORE_PROJECT:           this_is_fake
	    Mounts:
	      /google-app-creds.json from gcp-creds (ro)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-l9rhf (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-l9rhf:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	  gcp-creds:
	    Type:          HostPath (bare host directory volume)
	    Path:          /var/lib/minikube/google_application_credentials.json
	    HostPathType:  File
	QoS Class:         BestEffort
	Node-Selectors:    <none>
	Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                     From               Message
	  ----     ------     ----                    ----               -------
	  Normal   Scheduled  9m16s                   default-scheduler  Successfully assigned default/busybox to addons-009491
	  Normal   Pulling    7m42s (x4 over 9m15s)   kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Warning  Failed     7m42s (x4 over 9m15s)   kubelet            Failed to pull image "gcr.io/k8s-minikube/busybox:1.28.4-glibc": Error response from daemon: Head "https://gcr.io/v2/k8s-minikube/busybox/manifests/1.28.4-glibc": unauthorized: authentication failed
	  Warning  Failed     7m42s (x4 over 9m15s)   kubelet            Error: ErrImagePull
	  Warning  Failed     7m30s (x6 over 9m14s)   kubelet            Error: ImagePullBackOff
	  Normal   BackOff    4m10s (x20 over 9m14s)  kubelet            Back-off pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"

-- /stdout --
helpers_test.go:285: <<< TestAddons/parallel/Registry FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/Registry (73.53s)

TestFunctional/parallel/License (0.11s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2288: (dbg) Run:  out/minikube-linux-amd64 license
functional_test.go:2288: (dbg) Non-zero exit: out/minikube-linux-amd64 license: exit status 40 (105.253091ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to INET_LICENSES: Failed to download licenses: download request did not return a 200, received: 404
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_license_42713f820c0ac68901ecf7b12bfdf24c2cafe65d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2289: command "\n\n" failed: exit status 40
--- FAIL: TestFunctional/parallel/License (0.11s)


Test pass (308/341)

Order passed test Duration
3 TestDownloadOnly/v1.20.0/json-events 8.86
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.05
9 TestDownloadOnly/v1.20.0/DeleteAll 0.13
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.11
12 TestDownloadOnly/v1.31.0/json-events 3.97
13 TestDownloadOnly/v1.31.0/preload-exists 0
17 TestDownloadOnly/v1.31.0/LogsDuration 0.05
18 TestDownloadOnly/v1.31.0/DeleteAll 0.12
19 TestDownloadOnly/v1.31.0/DeleteAlwaysSucceeds 0.11
21 TestBinaryMirror 0.57
22 TestOffline 96.13
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.05
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.05
27 TestAddons/Setup 218.28
29 TestAddons/serial/Volcano 41.72
31 TestAddons/serial/GCPAuth/Namespaces 0.11
34 TestAddons/parallel/Ingress 19.28
35 TestAddons/parallel/InspektorGadget 11.73
36 TestAddons/parallel/MetricsServer 6.74
37 TestAddons/parallel/HelmTiller 11.86
39 TestAddons/parallel/CSI 40.07
40 TestAddons/parallel/Headlamp 13.94
41 TestAddons/parallel/CloudSpanner 6.46
42 TestAddons/parallel/LocalPath 58.19
43 TestAddons/parallel/NvidiaDevicePlugin 6.4
44 TestAddons/parallel/Yakd 10.66
45 TestAddons/StoppedEnableDisable 13.55
46 TestCertOptions 71.2
47 TestCertExpiration 343.13
48 TestDockerFlags 74.25
49 TestForceSystemdFlag 76.46
50 TestForceSystemdEnv 90.02
52 TestKVMDriverInstallOrUpdate 5.85
56 TestErrorSpam/setup 47.35
57 TestErrorSpam/start 0.32
58 TestErrorSpam/status 0.7
59 TestErrorSpam/pause 1.2
60 TestErrorSpam/unpause 1.38
61 TestErrorSpam/stop 15.25
64 TestFunctional/serial/CopySyncFile 0
65 TestFunctional/serial/StartWithProxy 88.32
66 TestFunctional/serial/AuditLog 0
67 TestFunctional/serial/SoftStart 40.82
68 TestFunctional/serial/KubeContext 0.04
69 TestFunctional/serial/KubectlGetPods 0.08
72 TestFunctional/serial/CacheCmd/cache/add_remote 2.3
73 TestFunctional/serial/CacheCmd/cache/add_local 1.21
74 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.04
75 TestFunctional/serial/CacheCmd/cache/list 0.04
76 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.21
77 TestFunctional/serial/CacheCmd/cache/cache_reload 1.09
78 TestFunctional/serial/CacheCmd/cache/delete 0.08
79 TestFunctional/serial/MinikubeKubectlCmd 0.1
80 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.1
81 TestFunctional/serial/ExtraConfig 41.12
82 TestFunctional/serial/ComponentHealth 0.06
83 TestFunctional/serial/LogsCmd 0.99
84 TestFunctional/serial/LogsFileCmd 0.99
85 TestFunctional/serial/InvalidService 4.8
87 TestFunctional/parallel/ConfigCmd 0.3
88 TestFunctional/parallel/DashboardCmd 43.14
89 TestFunctional/parallel/DryRun 0.28
90 TestFunctional/parallel/InternationalLanguage 0.16
91 TestFunctional/parallel/StatusCmd 0.86
95 TestFunctional/parallel/ServiceCmdConnect 8.59
96 TestFunctional/parallel/AddonsCmd 0.11
97 TestFunctional/parallel/PersistentVolumeClaim 50.11
99 TestFunctional/parallel/SSHCmd 0.35
100 TestFunctional/parallel/CpCmd 1.2
101 TestFunctional/parallel/MySQL 32.1
102 TestFunctional/parallel/FileSync 0.21
103 TestFunctional/parallel/CertSync 1.15
107 TestFunctional/parallel/NodeLabels 0.06
109 TestFunctional/parallel/NonActiveRuntimeDisabled 0.21
112 TestFunctional/parallel/ServiceCmd/DeployApp 11.19
122 TestFunctional/parallel/ProfileCmd/profile_not_create 0.32
123 TestFunctional/parallel/ProfileCmd/profile_list 0.32
124 TestFunctional/parallel/MountCmd/any-port 8.26
125 TestFunctional/parallel/ProfileCmd/profile_json_output 0.3
126 TestFunctional/parallel/Version/short 0.05
127 TestFunctional/parallel/Version/components 0.63
128 TestFunctional/parallel/ImageCommands/ImageListShort 0.21
129 TestFunctional/parallel/ImageCommands/ImageListTable 0.25
130 TestFunctional/parallel/ImageCommands/ImageListJson 0.22
131 TestFunctional/parallel/ImageCommands/ImageListYaml 0.21
132 TestFunctional/parallel/ImageCommands/ImageBuild 3.73
133 TestFunctional/parallel/ImageCommands/Setup 1.53
134 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.05
135 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.75
136 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.44
137 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.31
138 TestFunctional/parallel/ImageCommands/ImageRemove 0.36
139 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.68
140 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.45
141 TestFunctional/parallel/MountCmd/specific-port 1.99
142 TestFunctional/parallel/DockerEnv/bash 0.81
143 TestFunctional/parallel/UpdateContextCmd/no_changes 0.09
144 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.09
145 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.09
146 TestFunctional/parallel/ServiceCmd/List 0.34
147 TestFunctional/parallel/ServiceCmd/JSONOutput 0.31
148 TestFunctional/parallel/MountCmd/VerifyCleanup 1.48
149 TestFunctional/parallel/ServiceCmd/HTTPS 0.45
150 TestFunctional/parallel/ServiceCmd/Format 0.36
151 TestFunctional/parallel/ServiceCmd/URL 0.29
152 TestFunctional/delete_echo-server_images 0.04
153 TestFunctional/delete_my-image_image 0.01
154 TestFunctional/delete_minikube_cached_images 0.01
155 TestGvisorAddon 243.07
158 TestMultiControlPlane/serial/StartCluster 219.38
159 TestMultiControlPlane/serial/DeployApp 6.48
160 TestMultiControlPlane/serial/PingHostFromPods 1.25
161 TestMultiControlPlane/serial/AddWorkerNode 63.25
162 TestMultiControlPlane/serial/NodeLabels 0.06
163 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.54
164 TestMultiControlPlane/serial/CopyFile 12.53
165 TestMultiControlPlane/serial/StopSecondaryNode 13.17
166 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.38
167 TestMultiControlPlane/serial/RestartSecondaryNode 158.76
168 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.51
169 TestMultiControlPlane/serial/RestartClusterKeepsNodes 230.29
170 TestMultiControlPlane/serial/DeleteSecondaryNode 7.02
171 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.36
172 TestMultiControlPlane/serial/StopCluster 39.09
173 TestMultiControlPlane/serial/RestartCluster 127.8
174 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.36
175 TestMultiControlPlane/serial/AddSecondaryNode 93.42
176 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.51
179 TestImageBuild/serial/Setup 51.84
180 TestImageBuild/serial/NormalBuild 2.11
181 TestImageBuild/serial/BuildWithBuildArg 1.18
182 TestImageBuild/serial/BuildWithDockerIgnore 1.07
183 TestImageBuild/serial/BuildWithSpecifiedDockerfile 0.8
187 TestJSONOutput/start/Command 59.88
188 TestJSONOutput/start/Audit 0
190 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
191 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
193 TestJSONOutput/pause/Command 0.57
194 TestJSONOutput/pause/Audit 0
196 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
197 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
199 TestJSONOutput/unpause/Command 0.52
200 TestJSONOutput/unpause/Audit 0
202 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
203 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
205 TestJSONOutput/stop/Command 13.35
206 TestJSONOutput/stop/Audit 0
208 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
209 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
210 TestErrorJSONOutput 0.17
215 TestMainNoArgs 0.04
216 TestMinikubeProfile 103.69
219 TestMountStart/serial/StartWithMountFirst 32.53
220 TestMountStart/serial/VerifyMountFirst 0.36
221 TestMountStart/serial/StartWithMountSecond 33.66
222 TestMountStart/serial/VerifyMountSecond 0.39
223 TestMountStart/serial/DeleteFirst 0.66
224 TestMountStart/serial/VerifyMountPostDelete 0.37
225 TestMountStart/serial/Stop 2.27
226 TestMountStart/serial/RestartStopped 26.86
227 TestMountStart/serial/VerifyMountPostStop 0.37
230 TestMultiNode/serial/FreshStart2Nodes 128.37
231 TestMultiNode/serial/DeployApp2Nodes 5.07
232 TestMultiNode/serial/PingHostFrom2Pods 0.77
233 TestMultiNode/serial/AddNode 58.74
234 TestMultiNode/serial/MultiNodeLabels 0.06
235 TestMultiNode/serial/ProfileList 0.2
236 TestMultiNode/serial/CopyFile 6.85
237 TestMultiNode/serial/StopNode 3.36
238 TestMultiNode/serial/StartAfterStop 42.22
239 TestMultiNode/serial/RestartKeepsNodes 193.42
240 TestMultiNode/serial/DeleteNode 2.17
241 TestMultiNode/serial/StopMultiNode 25.08
242 TestMultiNode/serial/RestartMultiNode 102.35
243 TestMultiNode/serial/ValidateNameConflict 51.12
248 TestPreload 187.35
250 TestScheduledStopUnix 123.93
251 TestSkaffold 129.92
254 TestRunningBinaryUpgrade 135.85
256 TestKubernetesUpgrade 214
269 TestStoppedBinaryUpgrade/Setup 0.5
270 TestStoppedBinaryUpgrade/Upgrade 119.61
272 TestPause/serial/Start 79.6
273 TestStoppedBinaryUpgrade/MinikubeLogs 0.97
282 TestNoKubernetes/serial/StartNoK8sWithVersion 0.06
283 TestNoKubernetes/serial/StartWithK8s 91.14
284 TestNetworkPlugins/group/auto/Start 119.82
285 TestNetworkPlugins/group/kindnet/Start 125.81
286 TestPause/serial/SecondStartNoReconfiguration 108.96
287 TestNoKubernetes/serial/StartWithStopK8s 33.95
288 TestNoKubernetes/serial/Start 28.88
289 TestNetworkPlugins/group/auto/KubeletFlags 0.23
290 TestNetworkPlugins/group/auto/NetCatPod 10.33
291 TestNoKubernetes/serial/VerifyK8sNotRunning 0.22
292 TestNoKubernetes/serial/ProfileList 1.37
293 TestNoKubernetes/serial/Stop 2.3
294 TestNoKubernetes/serial/StartNoArgs 27.64
295 TestNetworkPlugins/group/auto/DNS 0.16
296 TestNetworkPlugins/group/auto/Localhost 0.16
297 TestNetworkPlugins/group/auto/HairPin 0.15
298 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
299 TestNetworkPlugins/group/kindnet/KubeletFlags 0.22
300 TestNetworkPlugins/group/kindnet/NetCatPod 10.28
301 TestPause/serial/Pause 1.11
302 TestNetworkPlugins/group/calico/Start 90.21
303 TestPause/serial/VerifyStatus 0.26
304 TestPause/serial/Unpause 0.52
305 TestPause/serial/PauseAgain 0.7
306 TestNetworkPlugins/group/kindnet/DNS 0.18
307 TestNetworkPlugins/group/kindnet/Localhost 0.17
308 TestPause/serial/DeletePaused 1.05
309 TestNetworkPlugins/group/kindnet/HairPin 0.15
310 TestPause/serial/VerifyDeletedResources 0.42
311 TestNetworkPlugins/group/custom-flannel/Start 99.87
312 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.2
313 TestNetworkPlugins/group/false/Start 115.04
314 TestNetworkPlugins/group/enable-default-cni/Start 135.42
315 TestNetworkPlugins/group/calico/ControllerPod 6.01
316 TestNetworkPlugins/group/calico/KubeletFlags 0.23
317 TestNetworkPlugins/group/calico/NetCatPod 11.26
318 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.23
319 TestNetworkPlugins/group/custom-flannel/NetCatPod 10.24
320 TestNetworkPlugins/group/calico/DNS 0.19
321 TestNetworkPlugins/group/calico/Localhost 0.16
322 TestNetworkPlugins/group/calico/HairPin 0.15
323 TestNetworkPlugins/group/custom-flannel/DNS 0.24
324 TestNetworkPlugins/group/custom-flannel/Localhost 0.19
325 TestNetworkPlugins/group/custom-flannel/HairPin 0.16
326 TestNetworkPlugins/group/false/KubeletFlags 0.26
327 TestNetworkPlugins/group/false/NetCatPod 12.3
328 TestNetworkPlugins/group/flannel/Start 68.81
329 TestNetworkPlugins/group/bridge/Start 88.33
330 TestNetworkPlugins/group/false/DNS 0.19
331 TestNetworkPlugins/group/false/Localhost 0.15
332 TestNetworkPlugins/group/false/HairPin 0.15
333 TestNetworkPlugins/group/kubenet/Start 96.81
334 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.21
335 TestNetworkPlugins/group/enable-default-cni/NetCatPod 11.27
336 TestNetworkPlugins/group/enable-default-cni/DNS 0.16
337 TestNetworkPlugins/group/enable-default-cni/Localhost 0.15
338 TestNetworkPlugins/group/enable-default-cni/HairPin 0.16
340 TestStartStop/group/old-k8s-version/serial/FirstStart 177.56
341 TestNetworkPlugins/group/flannel/ControllerPod 6.01
342 TestNetworkPlugins/group/flannel/KubeletFlags 0.21
343 TestNetworkPlugins/group/flannel/NetCatPod 10.21
344 TestNetworkPlugins/group/flannel/DNS 0.21
345 TestNetworkPlugins/group/flannel/Localhost 0.24
346 TestNetworkPlugins/group/flannel/HairPin 0.2
347 TestNetworkPlugins/group/bridge/KubeletFlags 0.21
348 TestNetworkPlugins/group/bridge/NetCatPod 10.24
350 TestStartStop/group/no-preload/serial/FirstStart 111.72
351 TestNetworkPlugins/group/bridge/DNS 0.17
352 TestNetworkPlugins/group/bridge/Localhost 0.16
353 TestNetworkPlugins/group/bridge/HairPin 0.15
354 TestNetworkPlugins/group/kubenet/KubeletFlags 0.23
355 TestNetworkPlugins/group/kubenet/NetCatPod 10.28
357 TestStartStop/group/embed-certs/serial/FirstStart 115.22
358 TestNetworkPlugins/group/kubenet/DNS 0.14
359 TestNetworkPlugins/group/kubenet/Localhost 0.13
360 TestNetworkPlugins/group/kubenet/HairPin 0.14
362 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 83.65
363 TestStartStop/group/no-preload/serial/DeployApp 9.32
364 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.15
365 TestStartStop/group/no-preload/serial/Stop 13.42
366 TestStartStop/group/old-k8s-version/serial/DeployApp 8.5
367 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 8.33
368 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.18
369 TestStartStop/group/no-preload/serial/SecondStart 299.98
370 TestStartStop/group/embed-certs/serial/DeployApp 11.3
371 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.99
372 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.19
373 TestStartStop/group/old-k8s-version/serial/Stop 13.39
374 TestStartStop/group/default-k8s-diff-port/serial/Stop 13.37
375 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1
376 TestStartStop/group/embed-certs/serial/Stop 13.36
377 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.18
378 TestStartStop/group/old-k8s-version/serial/SecondStart 409.64
379 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.17
380 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 332.38
381 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.2
382 TestStartStop/group/embed-certs/serial/SecondStart 339.64
383 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6.01
384 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.08
385 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.22
386 TestStartStop/group/no-preload/serial/Pause 2.46
388 TestStartStop/group/newest-cni/serial/FirstStart 60.76
389 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 9.01
390 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.09
391 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.21
392 TestStartStop/group/default-k8s-diff-port/serial/Pause 2.51
393 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6.01
394 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.07
395 TestStartStop/group/newest-cni/serial/DeployApp 0
396 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.88
397 TestStartStop/group/newest-cni/serial/Stop 12.73
398 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.21
399 TestStartStop/group/embed-certs/serial/Pause 2.42
400 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.17
401 TestStartStop/group/newest-cni/serial/SecondStart 36.92
402 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6
403 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
404 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
405 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.2
406 TestStartStop/group/newest-cni/serial/Pause 2.14
407 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.07
408 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.19
409 TestStartStop/group/old-k8s-version/serial/Pause 2.21
TestDownloadOnly/v1.20.0/json-events (8.86s)

=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-162018 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=kvm2 
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-162018 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=kvm2 : (8.861027276s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (8.86s)

TestDownloadOnly/v1.20.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.20.0/preload-exists
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

TestDownloadOnly/v1.20.0/LogsDuration (0.05s)

=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-162018
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-162018: exit status 85 (52.024431ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-162018 | jenkins | v1.34.0 | 06 Sep 24 18:29 UTC |          |
	|         | -p download-only-162018        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/06 18:29:14
	Running on machine: ubuntu-20-agent-3
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0906 18:29:14.230961   13296 out.go:345] Setting OutFile to fd 1 ...
	I0906 18:29:14.231109   13296 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0906 18:29:14.231120   13296 out.go:358] Setting ErrFile to fd 2...
	I0906 18:29:14.231127   13296 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0906 18:29:14.231308   13296 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19576-6054/.minikube/bin
	W0906 18:29:14.231421   13296 root.go:314] Error reading config file at /home/jenkins/minikube-integration/19576-6054/.minikube/config/config.json: open /home/jenkins/minikube-integration/19576-6054/.minikube/config/config.json: no such file or directory
	I0906 18:29:14.231939   13296 out.go:352] Setting JSON to true
	I0906 18:29:14.232844   13296 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-3","uptime":701,"bootTime":1725646653,"procs":173,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0906 18:29:14.232899   13296 start.go:139] virtualization: kvm guest
	I0906 18:29:14.235070   13296 out.go:97] [download-only-162018] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	W0906 18:29:14.235153   13296 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/19576-6054/.minikube/cache/preloaded-tarball: no such file or directory
	I0906 18:29:14.235206   13296 notify.go:220] Checking for updates...
	I0906 18:29:14.236386   13296 out.go:169] MINIKUBE_LOCATION=19576
	I0906 18:29:14.237414   13296 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0906 18:29:14.238399   13296 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19576-6054/kubeconfig
	I0906 18:29:14.239528   13296 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19576-6054/.minikube
	I0906 18:29:14.240615   13296 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0906 18:29:14.242631   13296 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0906 18:29:14.242837   13296 driver.go:394] Setting default libvirt URI to qemu:///system
	I0906 18:29:14.335416   13296 out.go:97] Using the kvm2 driver based on user configuration
	I0906 18:29:14.335456   13296 start.go:297] selected driver: kvm2
	I0906 18:29:14.335466   13296 start.go:901] validating driver "kvm2" against <nil>
	I0906 18:29:14.335757   13296 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0906 18:29:14.335869   13296 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19576-6054/.minikube/bin:/home/jenkins/workspace/KVM_Linux_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0906 18:29:14.349822   13296 install.go:137] /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0906 18:29:14.349865   13296 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0906 18:29:14.350352   13296 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0906 18:29:14.350529   13296 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0906 18:29:14.350554   13296 cni.go:84] Creating CNI manager for ""
	I0906 18:29:14.350565   13296 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0906 18:29:14.350607   13296 start.go:340] cluster config:
	{Name:download-only-162018 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-162018 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0906 18:29:14.350756   13296 iso.go:125] acquiring lock: {Name:mk05313ecb02befdc19949aecb1e2b6c72ebbece Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0906 18:29:14.352535   13296 out.go:97] Downloading VM boot image ...
	I0906 18:29:14.352559   13296 download.go:107] Downloading: https://storage.googleapis.com/minikube/iso/minikube-v1.34.0-amd64.iso?checksum=file:https://storage.googleapis.com/minikube/iso/minikube-v1.34.0-amd64.iso.sha256 -> /home/jenkins/minikube-integration/19576-6054/.minikube/cache/iso/amd64/minikube-v1.34.0-amd64.iso
	I0906 18:29:18.032366   13296 out.go:97] Starting "download-only-162018" primary control-plane node in "download-only-162018" cluster
	I0906 18:29:18.032390   13296 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0906 18:29:18.059129   13296 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4
	I0906 18:29:18.059157   13296 cache.go:56] Caching tarball of preloaded images
	I0906 18:29:18.059363   13296 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0906 18:29:18.060878   13296 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0906 18:29:18.060897   13296 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4 ...
	I0906 18:29:18.087232   13296 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4?checksum=md5:9a82241e9b8b4ad2b5cca73108f2c7a3 -> /home/jenkins/minikube-integration/19576-6054/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4
	
	
	* The control-plane node download-only-162018 host does not exist
	  To start a cluster, run: "minikube start -p download-only-162018"
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.05s)
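The LogsDuration check above treats `minikube logs` on a download-only profile as a pass when it exits with status 85. A minimal sketch of capturing an exit code this way in shell (the `run_and_capture` helper is hypothetical, and `sh -c 'exit 85'` stands in for the real `minikube logs -p <profile>` call):

```shell
# Hypothetical helper: run a command and report its exit status instead of
# aborting, the way the test asserts on "exit status 85".
run_and_capture() {
  status=0
  "$@" || status=$?   # capture the exit code even under "set -e"
  echo "exit status: $status"
}

# Stand-in for `minikube logs -p download-only-162018`, which exits 85 here.
run_and_capture sh -c 'exit 85'
```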

TestDownloadOnly/v1.20.0/DeleteAll (0.13s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.13s)

TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.11s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-162018
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.11s)

TestDownloadOnly/v1.31.0/json-events (3.97s)

=== RUN   TestDownloadOnly/v1.31.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-214882 --force --alsologtostderr --kubernetes-version=v1.31.0 --container-runtime=docker --driver=kvm2 
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-214882 --force --alsologtostderr --kubernetes-version=v1.31.0 --container-runtime=docker --driver=kvm2 : (3.968640881s)
--- PASS: TestDownloadOnly/v1.31.0/json-events (3.97s)

TestDownloadOnly/v1.31.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.31.0/preload-exists
--- PASS: TestDownloadOnly/v1.31.0/preload-exists (0.00s)

TestDownloadOnly/v1.31.0/LogsDuration (0.05s)

=== RUN   TestDownloadOnly/v1.31.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-214882
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-214882: exit status 85 (51.385623ms)
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-162018 | jenkins | v1.34.0 | 06 Sep 24 18:29 UTC |                     |
	|         | -p download-only-162018        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.34.0 | 06 Sep 24 18:29 UTC | 06 Sep 24 18:29 UTC |
	| delete  | -p download-only-162018        | download-only-162018 | jenkins | v1.34.0 | 06 Sep 24 18:29 UTC | 06 Sep 24 18:29 UTC |
	| start   | -o=json --download-only        | download-only-214882 | jenkins | v1.34.0 | 06 Sep 24 18:29 UTC |                     |
	|         | -p download-only-214882        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/06 18:29:23
	Running on machine: ubuntu-20-agent-3
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0906 18:29:23.385070   13509 out.go:345] Setting OutFile to fd 1 ...
	I0906 18:29:23.385151   13509 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0906 18:29:23.385158   13509 out.go:358] Setting ErrFile to fd 2...
	I0906 18:29:23.385163   13509 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0906 18:29:23.385339   13509 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19576-6054/.minikube/bin
	I0906 18:29:23.385830   13509 out.go:352] Setting JSON to true
	I0906 18:29:23.386592   13509 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-3","uptime":710,"bootTime":1725646653,"procs":171,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0906 18:29:23.387659   13509 start.go:139] virtualization: kvm guest
	I0906 18:29:23.389323   13509 out.go:97] [download-only-214882] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0906 18:29:23.389436   13509 notify.go:220] Checking for updates...
	I0906 18:29:23.390493   13509 out.go:169] MINIKUBE_LOCATION=19576
	I0906 18:29:23.391755   13509 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0906 18:29:23.392862   13509 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19576-6054/kubeconfig
	I0906 18:29:23.393928   13509 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19576-6054/.minikube
	I0906 18:29:23.394899   13509 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	
	
	* The control-plane node download-only-214882 host does not exist
	  To start a cluster, run: "minikube start -p download-only-214882"
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.0/LogsDuration (0.05s)

TestDownloadOnly/v1.31.0/DeleteAll (0.12s)

=== RUN   TestDownloadOnly/v1.31.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.31.0/DeleteAll (0.12s)

TestDownloadOnly/v1.31.0/DeleteAlwaysSucceeds (0.11s)

=== RUN   TestDownloadOnly/v1.31.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-214882
--- PASS: TestDownloadOnly/v1.31.0/DeleteAlwaysSucceeds (0.11s)

TestBinaryMirror (0.57s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-748579 --alsologtostderr --binary-mirror http://127.0.0.1:43707 --driver=kvm2 
helpers_test.go:175: Cleaning up "binary-mirror-748579" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-748579
--- PASS: TestBinaryMirror (0.57s)

TestOffline (96.13s)

=== RUN   TestOffline
=== PAUSE TestOffline
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-docker-515714 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2 
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-docker-515714 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2 : (1m35.073986232s)
helpers_test.go:175: Cleaning up "offline-docker-515714" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-docker-515714
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-docker-515714: (1.051299452s)
--- PASS: TestOffline (96.13s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1037: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-009491
addons_test.go:1037: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-009491: exit status 85 (52.538971ms)
-- stdout --
	* Profile "addons-009491" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-009491"
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1048: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-009491
addons_test.go:1048: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-009491: exit status 85 (51.78066ms)
-- stdout --
	* Profile "addons-009491" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-009491"
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

TestAddons/Setup (218.28s)

=== RUN   TestAddons/Setup
addons_test.go:110: (dbg) Run:  out/minikube-linux-amd64 start -p addons-009491 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=kvm2  --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:110: (dbg) Done: out/minikube-linux-amd64 start -p addons-009491 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=kvm2  --addons=ingress --addons=ingress-dns --addons=helm-tiller: (3m38.284600336s)
--- PASS: TestAddons/Setup (218.28s)
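TestAddons/Setup enables its whole addon set via repeated `--addons` flags on one `minikube start` invocation, as shown above. A dry-run sketch of the equivalent per-addon flow (`PROFILE` and the loop are illustrative; the script only echoes the commands, nothing is started):

```shell
# Dry-run sketch: emit one "minikube addons enable" command per addon from
# the set exercised by TestAddons/Setup, instead of repeating --addons flags
# on a single "minikube start". Echo only; no cluster is touched.
PROFILE=addons-009491
ADDONS="registry metrics-server volumesnapshots csi-hostpath-driver gcp-auth \
cloud-spanner inspektor-gadget storage-provisioner-rancher nvidia-device-plugin \
yakd volcano ingress ingress-dns helm-tiller"

for a in $ADDONS; do
  echo "minikube addons enable $a -p $PROFILE"
done
```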

TestAddons/serial/Volcano (41.72s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:897: volcano-scheduler stabilized in 19.067426ms
addons_test.go:913: volcano-controller stabilized in 19.113504ms
addons_test.go:905: volcano-admission stabilized in 19.265302ms
addons_test.go:919: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-scheduler-576bc46687-vnhgc" [9e9fda94-7700-4671-8a10-d6cc796465a4] Running
addons_test.go:919: (dbg) TestAddons/serial/Volcano: app=volcano-scheduler healthy within 6.004112008s
addons_test.go:923: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-admission-77d7d48b68-556gz" [777ce625-01cc-4946-be34-5c3f8da83b8f] Running
addons_test.go:923: (dbg) TestAddons/serial/Volcano: app=volcano-admission healthy within 5.004050492s
addons_test.go:927: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-controllers-56675bb4d5-jm47m" [0aac489b-6063-4b0e-ae09-377e5f5450c6] Running
addons_test.go:927: (dbg) TestAddons/serial/Volcano: app=volcano-controller healthy within 5.002889714s
addons_test.go:932: (dbg) Run:  kubectl --context addons-009491 delete -n volcano-system job volcano-admission-init
addons_test.go:938: (dbg) Run:  kubectl --context addons-009491 create -f testdata/vcjob.yaml
addons_test.go:946: (dbg) Run:  kubectl --context addons-009491 get vcjob -n my-volcano
addons_test.go:964: (dbg) TestAddons/serial/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:344: "test-job-nginx-0" [358ca087-5cdd-4bbd-b4a4-087caa2d2805] Pending
helpers_test.go:344: "test-job-nginx-0" [358ca087-5cdd-4bbd-b4a4-087caa2d2805] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "test-job-nginx-0" [358ca087-5cdd-4bbd-b4a4-087caa2d2805] Running
addons_test.go:964: (dbg) TestAddons/serial/Volcano: volcano.sh/job-name=test-job healthy within 15.003733982s
addons_test.go:968: (dbg) Run:  out/minikube-linux-amd64 -p addons-009491 addons disable volcano --alsologtostderr -v=1
addons_test.go:968: (dbg) Done: out/minikube-linux-amd64 -p addons-009491 addons disable volcano --alsologtostderr -v=1: (10.331927961s)
--- PASS: TestAddons/serial/Volcano (41.72s)
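The repeated "waiting 6m0s for pods matching ..." lines above come from a poll-until-healthy helper in helpers_test.go. A generic shell equivalent (the `wait_for` function is hypothetical; in real use the polled command would be a `kubectl get pods` check, and `true` stands in so the sketch runs anywhere):

```shell
# Hypothetical poll-until-ready helper mirroring the "waiting <timeout> for
# pods matching ..." pattern in the report: retry a command once per second
# until it succeeds or the timeout expires.
wait_for() {
  timeout=$1; shift
  start=$(date +%s)
  until "$@"; do
    if [ $(( $(date +%s) - start )) -ge "$timeout" ]; then
      echo "timed out after ${timeout}s" >&2
      return 1
    fi
    sleep 1
  done
  echo "ready"
}

# "true" stands in for e.g.: kubectl get pods -l app=volcano-scheduler ...
wait_for 5 true
```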

TestAddons/serial/GCPAuth/Namespaces (0.11s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:656: (dbg) Run:  kubectl --context addons-009491 create ns new-namespace
addons_test.go:670: (dbg) Run:  kubectl --context addons-009491 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.11s)

TestAddons/parallel/Ingress (19.28s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress
=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-009491 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-009491 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-009491 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [0d28bf22-4b59-49a7-b992-e31f88b90b5a] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [0d28bf22-4b59-49a7-b992-e31f88b90b5a] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 9.004716052s
addons_test.go:264: (dbg) Run:  out/minikube-linux-amd64 -p addons-009491 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:288: (dbg) Run:  kubectl --context addons-009491 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-linux-amd64 -p addons-009491 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.39.227
addons_test.go:308: (dbg) Run:  out/minikube-linux-amd64 -p addons-009491 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:308: (dbg) Done: out/minikube-linux-amd64 -p addons-009491 addons disable ingress-dns --alsologtostderr -v=1: (1.498641801s)
addons_test.go:313: (dbg) Run:  out/minikube-linux-amd64 -p addons-009491 addons disable ingress --alsologtostderr -v=1
addons_test.go:313: (dbg) Done: out/minikube-linux-amd64 -p addons-009491 addons disable ingress --alsologtostderr -v=1: (7.676747447s)
--- PASS: TestAddons/parallel/Ingress (19.28s)

TestAddons/parallel/InspektorGadget (11.73s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:848: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-kvrz6" [c643671a-e127-47fc-8738-79dae134f902] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:848: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.004369359s
addons_test.go:851: (dbg) Run:  out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-009491
addons_test.go:851: (dbg) Done: out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-009491: (5.726271373s)
--- PASS: TestAddons/parallel/InspektorGadget (11.73s)

TestAddons/parallel/MetricsServer (6.74s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:409: metrics-server stabilized in 3.746237ms
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-84c5f94fbc-nwfbc" [4d7b0eb8-eecc-4249-ae40-724b201c811f] Running
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 6.00466524s
addons_test.go:417: (dbg) Run:  kubectl --context addons-009491 top pods -n kube-system
addons_test.go:434: (dbg) Run:  out/minikube-linux-amd64 -p addons-009491 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (6.74s)

TestAddons/parallel/HelmTiller (11.86s)

=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:458: tiller-deploy stabilized in 2.755499ms
addons_test.go:460: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:344: "tiller-deploy-b48cc5f79-ht46b" [cac5bee2-4204-4335-97e5-73d41d66719f] Running
addons_test.go:460: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 6.004823984s
addons_test.go:475: (dbg) Run:  kubectl --context addons-009491 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:475: (dbg) Done: kubectl --context addons-009491 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (5.221133148s)
addons_test.go:492: (dbg) Run:  out/minikube-linux-amd64 -p addons-009491 addons disable helm-tiller --alsologtostderr -v=1
--- PASS: TestAddons/parallel/HelmTiller (11.86s)

TestAddons/parallel/CSI (40.07s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI
=== CONT  TestAddons/parallel/CSI
addons_test.go:567: csi-hostpath-driver pods stabilized in 6.729831ms
addons_test.go:570: (dbg) Run:  kubectl --context addons-009491 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:575: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-009491 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-009491 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-009491 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-009491 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-009491 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-009491 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-009491 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:580: (dbg) Run:  kubectl --context addons-009491 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:585: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [f8d1c392-a624-45d6-ac11-a9c2f5b6b896] Pending
helpers_test.go:344: "task-pv-pod" [f8d1c392-a624-45d6-ac11-a9c2f5b6b896] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [f8d1c392-a624-45d6-ac11-a9c2f5b6b896] Running
addons_test.go:585: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 10.004613604s
addons_test.go:590: (dbg) Run:  kubectl --context addons-009491 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:595: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-009491 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: 
helpers_test.go:419: (dbg) Run:  kubectl --context addons-009491 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:600: (dbg) Run:  kubectl --context addons-009491 delete pod task-pv-pod
addons_test.go:600: (dbg) Done: kubectl --context addons-009491 delete pod task-pv-pod: (1.105658803s)
addons_test.go:606: (dbg) Run:  kubectl --context addons-009491 delete pvc hpvc
addons_test.go:612: (dbg) Run:  kubectl --context addons-009491 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:617: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-009491 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-009491 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-009491 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-009491 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-009491 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-009491 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:622: (dbg) Run:  kubectl --context addons-009491 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:627: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [6d31f584-c97d-4db2-bc7b-1413b0728000] Pending
helpers_test.go:344: "task-pv-pod-restore" [6d31f584-c97d-4db2-bc7b-1413b0728000] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [6d31f584-c97d-4db2-bc7b-1413b0728000] Running
addons_test.go:627: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.004279089s
addons_test.go:632: (dbg) Run:  kubectl --context addons-009491 delete pod task-pv-pod-restore
addons_test.go:632: (dbg) Done: kubectl --context addons-009491 delete pod task-pv-pod-restore: (1.005393824s)
addons_test.go:636: (dbg) Run:  kubectl --context addons-009491 delete pvc hpvc-restore
addons_test.go:640: (dbg) Run:  kubectl --context addons-009491 delete volumesnapshot new-snapshot-demo
addons_test.go:644: (dbg) Run:  out/minikube-linux-amd64 -p addons-009491 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:644: (dbg) Done: out/minikube-linux-amd64 -p addons-009491 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.687505113s)
addons_test.go:648: (dbg) Run:  out/minikube-linux-amd64 -p addons-009491 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:648: (dbg) Done: out/minikube-linux-amd64 -p addons-009491 addons disable volumesnapshots --alsologtostderr -v=1: (1.032074829s)
--- PASS: TestAddons/parallel/CSI (40.07s)

TestAddons/parallel/Headlamp (13.94s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:830: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-009491 --alsologtostderr -v=1
addons_test.go:835: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-57fb76fcdb-cqpqz" [b247e472-8618-487e-b1c5-cd904b7f10b7] Pending
helpers_test.go:344: "headlamp-57fb76fcdb-cqpqz" [b247e472-8618-487e-b1c5-cd904b7f10b7] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-57fb76fcdb-cqpqz" [b247e472-8618-487e-b1c5-cd904b7f10b7] Running
addons_test.go:835: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 13.004908228s
addons_test.go:839: (dbg) Run:  out/minikube-linux-amd64 -p addons-009491 addons disable headlamp --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Headlamp (13.94s)

TestAddons/parallel/CloudSpanner (6.46s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:867: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-769b77f747-6gcqw" [20cd5c4a-53d6-4cbe-9080-06789dcc4fc2] Running
addons_test.go:867: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 6.004655862s
addons_test.go:870: (dbg) Run:  out/minikube-linux-amd64 addons disable cloud-spanner -p addons-009491
--- PASS: TestAddons/parallel/CloudSpanner (6.46s)

TestAddons/parallel/LocalPath (58.19s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:982: (dbg) Run:  kubectl --context addons-009491 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:988: (dbg) Run:  kubectl --context addons-009491 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:992: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-009491 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-009491 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-009491 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-009491 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-009491 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-009491 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-009491 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-009491 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-009491 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-009491 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:995: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [0a5455a8-7952-4101-ba99-355606e1a856] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [0a5455a8-7952-4101-ba99-355606e1a856] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [0a5455a8-7952-4101-ba99-355606e1a856] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:995: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 5.005125957s
addons_test.go:1000: (dbg) Run:  kubectl --context addons-009491 get pvc test-pvc -o=json
addons_test.go:1009: (dbg) Run:  out/minikube-linux-amd64 -p addons-009491 ssh "cat /opt/local-path-provisioner/pvc-a78db530-dc97-4b7f-a847-310a42db2e7a_default_test-pvc/file1"
addons_test.go:1021: (dbg) Run:  kubectl --context addons-009491 delete pod test-local-path
addons_test.go:1025: (dbg) Run:  kubectl --context addons-009491 delete pvc test-pvc
addons_test.go:1029: (dbg) Run:  out/minikube-linux-amd64 -p addons-009491 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1029: (dbg) Done: out/minikube-linux-amd64 -p addons-009491 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.409695795s)
--- PASS: TestAddons/parallel/LocalPath (58.19s)

TestAddons/parallel/NvidiaDevicePlugin (6.4s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1061: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-dq95c" [6be1300d-9afe-427c-8d48-8f641e89139a] Running
addons_test.go:1061: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.004273454s
addons_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 addons disable nvidia-device-plugin -p addons-009491
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.40s)

TestAddons/parallel/Yakd (10.66s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

=== CONT  TestAddons/parallel/Yakd
addons_test.go:1072: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-67d98fc6b-q5245" [0a1b75be-1ffd-4f23-a1fa-8e98fa198f4d] Running
addons_test.go:1072: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.005069352s
addons_test.go:1076: (dbg) Run:  out/minikube-linux-amd64 -p addons-009491 addons disable yakd --alsologtostderr -v=1
addons_test.go:1076: (dbg) Done: out/minikube-linux-amd64 -p addons-009491 addons disable yakd --alsologtostderr -v=1: (5.653288189s)
--- PASS: TestAddons/parallel/Yakd (10.66s)

TestAddons/StoppedEnableDisable (13.55s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:174: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-009491
addons_test.go:174: (dbg) Done: out/minikube-linux-amd64 stop -p addons-009491: (13.300380142s)
addons_test.go:178: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-009491
addons_test.go:182: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-009491
addons_test.go:187: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-009491
--- PASS: TestAddons/StoppedEnableDisable (13.55s)

TestCertOptions (71.2s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-329123 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2 
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-329123 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2 : (1m9.684231505s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-329123 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-329123 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-329123 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-329123" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-329123
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-329123: (1.042404214s)
--- PASS: TestCertOptions (71.20s)

TestCertExpiration (343.13s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-562559 --memory=2048 --cert-expiration=3m --driver=kvm2 
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-562559 --memory=2048 --cert-expiration=3m --driver=kvm2 : (1m47.008785746s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-562559 --memory=2048 --cert-expiration=8760h --driver=kvm2 
E0906 19:32:49.163476   13284 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6054/.minikube/profiles/skaffold-017275/client.crt: no such file or directory" logger="UnhandledError"
E0906 19:32:49.169859   13284 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6054/.minikube/profiles/skaffold-017275/client.crt: no such file or directory" logger="UnhandledError"
E0906 19:32:49.181366   13284 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6054/.minikube/profiles/skaffold-017275/client.crt: no such file or directory" logger="UnhandledError"
E0906 19:32:49.202852   13284 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6054/.minikube/profiles/skaffold-017275/client.crt: no such file or directory" logger="UnhandledError"
E0906 19:32:49.244302   13284 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6054/.minikube/profiles/skaffold-017275/client.crt: no such file or directory" logger="UnhandledError"
E0906 19:32:49.326194   13284 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6054/.minikube/profiles/skaffold-017275/client.crt: no such file or directory" logger="UnhandledError"
E0906 19:32:49.487706   13284 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6054/.minikube/profiles/skaffold-017275/client.crt: no such file or directory" logger="UnhandledError"
E0906 19:32:49.809482   13284 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6054/.minikube/profiles/skaffold-017275/client.crt: no such file or directory" logger="UnhandledError"
E0906 19:32:50.451612   13284 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6054/.minikube/profiles/skaffold-017275/client.crt: no such file or directory" logger="UnhandledError"
E0906 19:32:51.733577   13284 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6054/.minikube/profiles/skaffold-017275/client.crt: no such file or directory" logger="UnhandledError"
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-562559 --memory=2048 --cert-expiration=8760h --driver=kvm2 : (55.076632055s)
helpers_test.go:175: Cleaning up "cert-expiration-562559" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-562559
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-562559: (1.047527386s)
--- PASS: TestCertExpiration (343.13s)

TestDockerFlags (74.25s)

=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags

=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-linux-amd64 start -p docker-flags-352831 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=kvm2 
docker_test.go:51: (dbg) Done: out/minikube-linux-amd64 start -p docker-flags-352831 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=kvm2 : (1m12.638449342s)
docker_test.go:56: (dbg) Run:  out/minikube-linux-amd64 -p docker-flags-352831 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:67: (dbg) Run:  out/minikube-linux-amd64 -p docker-flags-352831 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
helpers_test.go:175: Cleaning up "docker-flags-352831" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-flags-352831
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-flags-352831: (1.187048233s)
--- PASS: TestDockerFlags (74.25s)

TestForceSystemdFlag (76.46s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-694946 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2 
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-694946 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2 : (1m15.472827106s)
docker_test.go:110: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-694946 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-flag-694946" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-694946
--- PASS: TestForceSystemdFlag (76.46s)

TestForceSystemdEnv (90.02s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-096649 --memory=2048 --alsologtostderr -v=5 --driver=kvm2 
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-096649 --memory=2048 --alsologtostderr -v=5 --driver=kvm2 : (1m28.733211299s)
docker_test.go:110: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-env-096649 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-env-096649" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-096649
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-096649: (1.015427681s)
--- PASS: TestForceSystemdEnv (90.02s)

TestKVMDriverInstallOrUpdate (5.85s)

=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate

=== CONT  TestKVMDriverInstallOrUpdate
--- PASS: TestKVMDriverInstallOrUpdate (5.85s)

TestErrorSpam/setup (47.35s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-143492 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-143492 --driver=kvm2 
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-143492 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-143492 --driver=kvm2 : (47.346271385s)
--- PASS: TestErrorSpam/setup (47.35s)

TestErrorSpam/start (0.32s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-143492 --log_dir /tmp/nospam-143492 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-143492 --log_dir /tmp/nospam-143492 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-143492 --log_dir /tmp/nospam-143492 start --dry-run
--- PASS: TestErrorSpam/start (0.32s)

TestErrorSpam/status (0.7s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-143492 --log_dir /tmp/nospam-143492 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-143492 --log_dir /tmp/nospam-143492 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-143492 --log_dir /tmp/nospam-143492 status
--- PASS: TestErrorSpam/status (0.70s)

TestErrorSpam/pause (1.2s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-143492 --log_dir /tmp/nospam-143492 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-143492 --log_dir /tmp/nospam-143492 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-143492 --log_dir /tmp/nospam-143492 pause
--- PASS: TestErrorSpam/pause (1.20s)

TestErrorSpam/unpause (1.38s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-143492 --log_dir /tmp/nospam-143492 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-143492 --log_dir /tmp/nospam-143492 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-143492 --log_dir /tmp/nospam-143492 unpause
--- PASS: TestErrorSpam/unpause (1.38s)

TestErrorSpam/stop (15.25s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-143492 --log_dir /tmp/nospam-143492 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-143492 --log_dir /tmp/nospam-143492 stop: (12.453672088s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-143492 --log_dir /tmp/nospam-143492 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-143492 --log_dir /tmp/nospam-143492 stop: (1.672594916s)
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-143492 --log_dir /tmp/nospam-143492 stop
error_spam_test.go:182: (dbg) Done: out/minikube-linux-amd64 -p nospam-143492 --log_dir /tmp/nospam-143492 stop: (1.127035615s)
--- PASS: TestErrorSpam/stop (15.25s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1855: local sync path: /home/jenkins/minikube-integration/19576-6054/.minikube/files/etc/test/nested/copy/13284/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (88.32s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2234: (dbg) Run:  out/minikube-linux-amd64 start -p functional-745007 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2 
functional_test.go:2234: (dbg) Done: out/minikube-linux-amd64 start -p functional-745007 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2 : (1m28.323307902s)
--- PASS: TestFunctional/serial/StartWithProxy (88.32s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (40.82s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:659: (dbg) Run:  out/minikube-linux-amd64 start -p functional-745007 --alsologtostderr -v=8
functional_test.go:659: (dbg) Done: out/minikube-linux-amd64 start -p functional-745007 --alsologtostderr -v=8: (40.820364063s)
functional_test.go:663: soft start took 40.821061084s for "functional-745007" cluster.
--- PASS: TestFunctional/serial/SoftStart (40.82s)

TestFunctional/serial/KubeContext (0.04s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:681: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

TestFunctional/serial/KubectlGetPods (0.08s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:696: (dbg) Run:  kubectl --context functional-745007 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.08s)

TestFunctional/serial/CacheCmd/cache/add_remote (2.3s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-745007 cache add registry.k8s.io/pause:3.1
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-745007 cache add registry.k8s.io/pause:3.3
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-745007 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (2.30s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_local (1.21s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1077: (dbg) Run:  docker build -t minikube-local-cache-test:functional-745007 /tmp/TestFunctionalserialCacheCmdcacheadd_local2393071886/001
functional_test.go:1089: (dbg) Run:  out/minikube-linux-amd64 -p functional-745007 cache add minikube-local-cache-test:functional-745007
functional_test.go:1094: (dbg) Run:  out/minikube-linux-amd64 -p functional-745007 cache delete minikube-local-cache-test:functional-745007
functional_test.go:1083: (dbg) Run:  docker rmi minikube-local-cache-test:functional-745007
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.21s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1102: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/list (0.04s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1110: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.04s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.21s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1124: (dbg) Run:  out/minikube-linux-amd64 -p functional-745007 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.21s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/cache_reload (1.09s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1147: (dbg) Run:  out/minikube-linux-amd64 -p functional-745007 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Run:  out/minikube-linux-amd64 -p functional-745007 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-745007 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (205.5696ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1158: (dbg) Run:  out/minikube-linux-amd64 -p functional-745007 cache reload
functional_test.go:1163: (dbg) Run:  out/minikube-linux-amd64 -p functional-745007 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.09s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete (0.08s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1172: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1172: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.08s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmd (0.1s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:716: (dbg) Run:  out/minikube-linux-amd64 -p functional-745007 kubectl -- --context functional-745007 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.10s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.1s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:741: (dbg) Run:  out/kubectl --context functional-745007 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.10s)

                                                
                                    
TestFunctional/serial/ExtraConfig (41.12s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:757: (dbg) Run:  out/minikube-linux-amd64 start -p functional-745007 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:757: (dbg) Done: out/minikube-linux-amd64 start -p functional-745007 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (41.118976026s)
functional_test.go:761: restart took 41.119084497s for "functional-745007" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (41.12s)

                                                
                                    
TestFunctional/serial/ComponentHealth (0.06s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:810: (dbg) Run:  kubectl --context functional-745007 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:825: etcd phase: Running
functional_test.go:835: etcd status: Ready
functional_test.go:825: kube-apiserver phase: Running
functional_test.go:835: kube-apiserver status: Ready
functional_test.go:825: kube-controller-manager phase: Running
functional_test.go:835: kube-controller-manager status: Ready
functional_test.go:825: kube-scheduler phase: Running
functional_test.go:835: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.06s)

                                                
                                    
TestFunctional/serial/LogsCmd (0.99s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1236: (dbg) Run:  out/minikube-linux-amd64 -p functional-745007 logs
--- PASS: TestFunctional/serial/LogsCmd (0.99s)

                                                
                                    
TestFunctional/serial/LogsFileCmd (0.99s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1250: (dbg) Run:  out/minikube-linux-amd64 -p functional-745007 logs --file /tmp/TestFunctionalserialLogsFileCmd757944300/001/logs.txt
--- PASS: TestFunctional/serial/LogsFileCmd (0.99s)

                                                
                                    
TestFunctional/serial/InvalidService (4.8s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2321: (dbg) Run:  kubectl --context functional-745007 apply -f testdata/invalidsvc.yaml
functional_test.go:2335: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-745007
functional_test.go:2335: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-745007: exit status 115 (272.890079ms)

-- stdout --
	|-----------|-------------|-------------|-----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |             URL             |
	|-----------|-------------|-------------|-----------------------------|
	| default   | invalid-svc |          80 | http://192.168.39.170:30478 |
	|-----------|-------------|-------------|-----------------------------|

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2327: (dbg) Run:  kubectl --context functional-745007 delete -f testdata/invalidsvc.yaml
functional_test.go:2327: (dbg) Done: kubectl --context functional-745007 delete -f testdata/invalidsvc.yaml: (1.3328569s)
--- PASS: TestFunctional/serial/InvalidService (4.80s)

                                                
                                    
TestFunctional/parallel/ConfigCmd (0.3s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-745007 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-745007 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-745007 config get cpus: exit status 14 (56.440432ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-745007 config set cpus 2
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-745007 config get cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-745007 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-745007 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-745007 config get cpus: exit status 14 (41.390059ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.30s)

                                                
                                    
TestFunctional/parallel/DashboardCmd (43.14s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:905: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-745007 --alsologtostderr -v=1]
functional_test.go:910: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-745007 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 23509: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (43.14s)

                                                
                                    
TestFunctional/parallel/DryRun (0.28s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:974: (dbg) Run:  out/minikube-linux-amd64 start -p functional-745007 --dry-run --memory 250MB --alsologtostderr --driver=kvm2 
functional_test.go:974: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-745007 --dry-run --memory 250MB --alsologtostderr --driver=kvm2 : exit status 23 (148.337752ms)

-- stdout --
	* [functional-745007] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19576
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19576-6054/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19576-6054/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile

-- /stdout --
** stderr ** 
	I0906 18:47:41.095264   23379 out.go:345] Setting OutFile to fd 1 ...
	I0906 18:47:41.095387   23379 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0906 18:47:41.095399   23379 out.go:358] Setting ErrFile to fd 2...
	I0906 18:47:41.095405   23379 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0906 18:47:41.095584   23379 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19576-6054/.minikube/bin
	I0906 18:47:41.096211   23379 out.go:352] Setting JSON to false
	I0906 18:47:41.097494   23379 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-3","uptime":1808,"bootTime":1725646653,"procs":271,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0906 18:47:41.097577   23379 start.go:139] virtualization: kvm guest
	I0906 18:47:41.099263   23379 out.go:177] * [functional-745007] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0906 18:47:41.100437   23379 notify.go:220] Checking for updates...
	I0906 18:47:41.100462   23379 out.go:177]   - MINIKUBE_LOCATION=19576
	I0906 18:47:41.101710   23379 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0906 18:47:41.103010   23379 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19576-6054/kubeconfig
	I0906 18:47:41.104286   23379 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19576-6054/.minikube
	I0906 18:47:41.105425   23379 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0906 18:47:41.106470   23379 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0906 18:47:41.108106   23379 config.go:182] Loaded profile config "functional-745007": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0906 18:47:41.108684   23379 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0906 18:47:41.108777   23379 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 18:47:41.125759   23379 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33945
	I0906 18:47:41.126110   23379 main.go:141] libmachine: () Calling .GetVersion
	I0906 18:47:41.126786   23379 main.go:141] libmachine: Using API Version  1
	I0906 18:47:41.126808   23379 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 18:47:41.127128   23379 main.go:141] libmachine: () Calling .GetMachineName
	I0906 18:47:41.127413   23379 main.go:141] libmachine: (functional-745007) Calling .DriverName
	I0906 18:47:41.127664   23379 driver.go:394] Setting default libvirt URI to qemu:///system
	I0906 18:47:41.128185   23379 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0906 18:47:41.128224   23379 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 18:47:41.142795   23379 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43289
	I0906 18:47:41.143285   23379 main.go:141] libmachine: () Calling .GetVersion
	I0906 18:47:41.143998   23379 main.go:141] libmachine: Using API Version  1
	I0906 18:47:41.144023   23379 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 18:47:41.144424   23379 main.go:141] libmachine: () Calling .GetMachineName
	I0906 18:47:41.144600   23379 main.go:141] libmachine: (functional-745007) Calling .DriverName
	I0906 18:47:41.182945   23379 out.go:177] * Using the kvm2 driver based on existing profile
	I0906 18:47:41.184169   23379 start.go:297] selected driver: kvm2
	I0906 18:47:41.184187   23379 start.go:901] validating driver "kvm2" against &{Name:functional-745007 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.34.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:functional-745007 Nam
espace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.170 Port:8441 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host
Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0906 18:47:41.184336   23379 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0906 18:47:41.186279   23379 out.go:201] 
	W0906 18:47:41.187314   23379 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0906 18:47:41.188592   23379 out.go:201] 

** /stderr **
functional_test.go:991: (dbg) Run:  out/minikube-linux-amd64 start -p functional-745007 --dry-run --alsologtostderr -v=1 --driver=kvm2 
--- PASS: TestFunctional/parallel/DryRun (0.28s)

                                                
                                    
TestFunctional/parallel/InternationalLanguage (0.16s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1020: (dbg) Run:  out/minikube-linux-amd64 start -p functional-745007 --dry-run --memory 250MB --alsologtostderr --driver=kvm2 
functional_test.go:1020: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-745007 --dry-run --memory 250MB --alsologtostderr --driver=kvm2 : exit status 23 (155.147648ms)

-- stdout --
	* [functional-745007] minikube v1.34.0 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19576
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19576-6054/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19576-6054/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant

-- /stdout --
** stderr ** 
	I0906 18:47:41.054944   23363 out.go:345] Setting OutFile to fd 1 ...
	I0906 18:47:41.055086   23363 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0906 18:47:41.055097   23363 out.go:358] Setting ErrFile to fd 2...
	I0906 18:47:41.055103   23363 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0906 18:47:41.055514   23363 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19576-6054/.minikube/bin
	I0906 18:47:41.056201   23363 out.go:352] Setting JSON to false
	I0906 18:47:41.057516   23363 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-3","uptime":1808,"bootTime":1725646653,"procs":269,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0906 18:47:41.057618   23363 start.go:139] virtualization: kvm guest
	I0906 18:47:41.059892   23363 out.go:177] * [functional-745007] minikube v1.34.0 sur Ubuntu 20.04 (kvm/amd64)
	I0906 18:47:41.061137   23363 notify.go:220] Checking for updates...
	I0906 18:47:41.061184   23363 out.go:177]   - MINIKUBE_LOCATION=19576
	I0906 18:47:41.062445   23363 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0906 18:47:41.064033   23363 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19576-6054/kubeconfig
	I0906 18:47:41.065306   23363 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19576-6054/.minikube
	I0906 18:47:41.066441   23363 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0906 18:47:41.067600   23363 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0906 18:47:41.069166   23363 config.go:182] Loaded profile config "functional-745007": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0906 18:47:41.069576   23363 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0906 18:47:41.069617   23363 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 18:47:41.086118   23363 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35409
	I0906 18:47:41.086677   23363 main.go:141] libmachine: () Calling .GetVersion
	I0906 18:47:41.087384   23363 main.go:141] libmachine: Using API Version  1
	I0906 18:47:41.087411   23363 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 18:47:41.087759   23363 main.go:141] libmachine: () Calling .GetMachineName
	I0906 18:47:41.087955   23363 main.go:141] libmachine: (functional-745007) Calling .DriverName
	I0906 18:47:41.088246   23363 driver.go:394] Setting default libvirt URI to qemu:///system
	I0906 18:47:41.088678   23363 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0906 18:47:41.088712   23363 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 18:47:41.108276   23363 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41915
	I0906 18:47:41.108688   23363 main.go:141] libmachine: () Calling .GetVersion
	I0906 18:47:41.109211   23363 main.go:141] libmachine: Using API Version  1
	I0906 18:47:41.109245   23363 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 18:47:41.109611   23363 main.go:141] libmachine: () Calling .GetMachineName
	I0906 18:47:41.109855   23363 main.go:141] libmachine: (functional-745007) Calling .DriverName
	I0906 18:47:41.147532   23363 out.go:177] * Utilisation du pilote kvm2 basé sur le profil existant
	I0906 18:47:41.148591   23363 start.go:297] selected driver: kvm2
	I0906 18:47:41.148605   23363 start.go:901] validating driver "kvm2" against &{Name:functional-745007 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.34.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:functional-745007 Nam
espace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.170 Port:8441 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host
Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0906 18:47:41.148749   23363 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0906 18:47:41.150907   23363 out.go:201] 
	W0906 18:47:41.152223   23363 out.go:270] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0906 18:47:41.153446   23363 out.go:201] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.16s)

                                                
                                    
TestFunctional/parallel/StatusCmd (0.86s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:854: (dbg) Run:  out/minikube-linux-amd64 -p functional-745007 status
functional_test.go:860: (dbg) Run:  out/minikube-linux-amd64 -p functional-745007 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:872: (dbg) Run:  out/minikube-linux-amd64 -p functional-745007 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.86s)

                                                
                                    
TestFunctional/parallel/ServiceCmdConnect (8.59s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1629: (dbg) Run:  kubectl --context functional-745007 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1635: (dbg) Run:  kubectl --context functional-745007 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-67bdd5bbb4-k6c9j" [f792c0a4-09a5-407e-9051-3d194e378b48] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-67bdd5bbb4-k6c9j" [f792c0a4-09a5-407e-9051-3d194e378b48] Running
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 8.008229848s
functional_test.go:1649: (dbg) Run:  out/minikube-linux-amd64 -p functional-745007 service hello-node-connect --url
functional_test.go:1655: found endpoint for hello-node-connect: http://192.168.39.170:32001
functional_test.go:1675: http://192.168.39.170:32001: success! body:
Hostname: hello-node-connect-67bdd5bbb4-k6c9j
Pod Information:
	-no pod information available-
Server values:
	server_version=nginx: 1.13.3 - lua: 10008
Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.39.170:8080/
Request Headers:
	accept-encoding=gzip
	host=192.168.39.170:32001
	user-agent=Go-http-client/1.1
Request Body:
	-no body in request-
--- PASS: TestFunctional/parallel/ServiceCmdConnect (8.59s)

TestFunctional/parallel/AddonsCmd (0.11s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1690: (dbg) Run:  out/minikube-linux-amd64 -p functional-745007 addons list
functional_test.go:1702: (dbg) Run:  out/minikube-linux-amd64 -p functional-745007 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.11s)

TestFunctional/parallel/PersistentVolumeClaim (50.11s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [20618a31-02be-4b36-af49-b7a294a977c8] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.004676178s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-745007 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-745007 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-745007 get pvc myclaim -o=json
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-745007 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-745007 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [60b1b749-1b30-42c7-88a4-2a0a03b5e928] Pending
helpers_test.go:344: "sp-pod" [60b1b749-1b30-42c7-88a4-2a0a03b5e928] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [60b1b749-1b30-42c7-88a4-2a0a03b5e928] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 16.00605319s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-745007 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-745007 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-745007 delete -f testdata/storage-provisioner/pod.yaml: (1.124008284s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-745007 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [3912a629-c1ce-442d-9d61-9391a7e417ee] Pending
helpers_test.go:344: "sp-pod" [3912a629-c1ce-442d-9d61-9391a7e417ee] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [3912a629-c1ce-442d-9d61-9391a7e417ee] Running
E0906 18:48:11.873522   13284 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6054/.minikube/profiles/addons-009491/client.crt: no such file or directory" logger="UnhandledError"
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 24.00354651s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-745007 exec sp-pod -- ls /tmp/mount
2024/09/06 18:48:23 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (50.11s)

TestFunctional/parallel/SSHCmd (0.35s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1725: (dbg) Run:  out/minikube-linux-amd64 -p functional-745007 ssh "echo hello"
functional_test.go:1742: (dbg) Run:  out/minikube-linux-amd64 -p functional-745007 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.35s)

TestFunctional/parallel/CpCmd (1.2s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-745007 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-745007 ssh -n functional-745007 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-745007 cp functional-745007:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd1183913212/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-745007 ssh -n functional-745007 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-745007 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-745007 ssh -n functional-745007 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.20s)

TestFunctional/parallel/MySQL (32.1s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1793: (dbg) Run:  kubectl --context functional-745007 replace --force -f testdata/mysql.yaml
functional_test.go:1799: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-6cdb49bbb-gd68r" [9c4347a9-0256-49b8-8edc-fdf8bc3a18ba] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-6cdb49bbb-gd68r" [9c4347a9-0256-49b8-8edc-fdf8bc3a18ba] Running
E0906 18:48:06.742416   13284 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6054/.minikube/profiles/addons-009491/client.crt: no such file or directory" logger="UnhandledError"
E0906 18:48:06.749230   13284 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6054/.minikube/profiles/addons-009491/client.crt: no such file or directory" logger="UnhandledError"
E0906 18:48:06.760547   13284 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6054/.minikube/profiles/addons-009491/client.crt: no such file or directory" logger="UnhandledError"
E0906 18:48:06.781947   13284 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6054/.minikube/profiles/addons-009491/client.crt: no such file or directory" logger="UnhandledError"
E0906 18:48:06.823488   13284 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6054/.minikube/profiles/addons-009491/client.crt: no such file or directory" logger="UnhandledError"
E0906 18:48:06.904922   13284 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6054/.minikube/profiles/addons-009491/client.crt: no such file or directory" logger="UnhandledError"
E0906 18:48:07.066470   13284 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6054/.minikube/profiles/addons-009491/client.crt: no such file or directory" logger="UnhandledError"
E0906 18:48:07.388151   13284 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6054/.minikube/profiles/addons-009491/client.crt: no such file or directory" logger="UnhandledError"
E0906 18:48:08.030057   13284 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6054/.minikube/profiles/addons-009491/client.crt: no such file or directory" logger="UnhandledError"
E0906 18:48:09.312232   13284 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6054/.minikube/profiles/addons-009491/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:1799: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 28.007667381s
functional_test.go:1807: (dbg) Run:  kubectl --context functional-745007 exec mysql-6cdb49bbb-gd68r -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context functional-745007 exec mysql-6cdb49bbb-gd68r -- mysql -ppassword -e "show databases;": exit status 1 (287.095806ms)
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1
** /stderr **
functional_test.go:1807: (dbg) Run:  kubectl --context functional-745007 exec mysql-6cdb49bbb-gd68r -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context functional-745007 exec mysql-6cdb49bbb-gd68r -- mysql -ppassword -e "show databases;": exit status 1 (153.943199ms)
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1
** /stderr **
functional_test.go:1807: (dbg) Run:  kubectl --context functional-745007 exec mysql-6cdb49bbb-gd68r -- mysql -ppassword -e "show databases;"
E0906 18:48:16.995802   13284 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6054/.minikube/profiles/addons-009491/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestFunctional/parallel/MySQL (32.10s)

TestFunctional/parallel/FileSync (0.21s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1929: Checking for existence of /etc/test/nested/copy/13284/hosts within VM
functional_test.go:1931: (dbg) Run:  out/minikube-linux-amd64 -p functional-745007 ssh "sudo cat /etc/test/nested/copy/13284/hosts"
functional_test.go:1936: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.21s)

TestFunctional/parallel/CertSync (1.15s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1972: Checking for existence of /etc/ssl/certs/13284.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-745007 ssh "sudo cat /etc/ssl/certs/13284.pem"
functional_test.go:1972: Checking for existence of /usr/share/ca-certificates/13284.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-745007 ssh "sudo cat /usr/share/ca-certificates/13284.pem"
functional_test.go:1972: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-745007 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/132842.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-745007 ssh "sudo cat /etc/ssl/certs/132842.pem"
functional_test.go:1999: Checking for existence of /usr/share/ca-certificates/132842.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-745007 ssh "sudo cat /usr/share/ca-certificates/132842.pem"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-745007 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.15s)

TestFunctional/parallel/NodeLabels (0.06s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:219: (dbg) Run:  kubectl --context functional-745007 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.06s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.21s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2027: (dbg) Run:  out/minikube-linux-amd64 -p functional-745007 ssh "sudo systemctl is-active crio"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-745007 ssh "sudo systemctl is-active crio": exit status 1 (209.542799ms)
-- stdout --
	inactive
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.21s)

TestFunctional/parallel/ServiceCmd/DeployApp (11.19s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1439: (dbg) Run:  kubectl --context functional-745007 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1445: (dbg) Run:  kubectl --context functional-745007 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-6b9f76b5c7-7xp7f" [59225a55-4d47-4f5d-b2c4-0eff61295ed9] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-6b9f76b5c7-7xp7f" [59225a55-4d47-4f5d-b2c4-0eff61295ed9] Running
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 11.013135726s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (11.19s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.32s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1270: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1275: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.32s)

TestFunctional/parallel/ProfileCmd/profile_list (0.32s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1310: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1315: Took "276.50437ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1324: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1329: Took "46.401459ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.32s)

TestFunctional/parallel/MountCmd/any-port (8.26s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-745007 /tmp/TestFunctionalparallelMountCmdany-port1877553235/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1725648449319759066" to /tmp/TestFunctionalparallelMountCmdany-port1877553235/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1725648449319759066" to /tmp/TestFunctionalparallelMountCmdany-port1877553235/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1725648449319759066" to /tmp/TestFunctionalparallelMountCmdany-port1877553235/001/test-1725648449319759066
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-745007 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-745007 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (231.816014ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-745007 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-745007 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Sep  6 18:47 created-by-test
-rw-r--r-- 1 docker docker 24 Sep  6 18:47 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Sep  6 18:47 test-1725648449319759066
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-745007 ssh cat /mount-9p/test-1725648449319759066
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-745007 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [278942f9-dcf4-40db-a78f-8b65b530ac61] Pending
helpers_test.go:344: "busybox-mount" [278942f9-dcf4-40db-a78f-8b65b530ac61] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [278942f9-dcf4-40db-a78f-8b65b530ac61] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [278942f9-dcf4-40db-a78f-8b65b530ac61] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 6.005728513s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-745007 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-745007 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-745007 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-745007 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-745007 /tmp/TestFunctionalparallelMountCmdany-port1877553235/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (8.26s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.3s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1361: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1366: Took "245.685521ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1374: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1379: Took "53.275239ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.30s)

TestFunctional/parallel/Version/short (0.05s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2256: (dbg) Run:  out/minikube-linux-amd64 -p functional-745007 version --short
--- PASS: TestFunctional/parallel/Version/short (0.05s)

TestFunctional/parallel/Version/components (0.63s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2270: (dbg) Run:  out/minikube-linux-amd64 -p functional-745007 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.63s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.21s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-745007 image ls --format short --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-745007 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.31.0
registry.k8s.io/kube-proxy:v1.31.0
registry.k8s.io/kube-controller-manager:v1.31.0
registry.k8s.io/kube-apiserver:v1.31.0
registry.k8s.io/etcd:3.5.15-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.11.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/minikube-local-cache-test:functional-745007
docker.io/kicbase/echo-server:functional-745007
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-745007 image ls --format short --alsologtostderr:
I0906 18:47:48.116030   23612 out.go:345] Setting OutFile to fd 1 ...
I0906 18:47:48.116324   23612 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0906 18:47:48.116337   23612 out.go:358] Setting ErrFile to fd 2...
I0906 18:47:48.116343   23612 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0906 18:47:48.116597   23612 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19576-6054/.minikube/bin
I0906 18:47:48.117329   23612 config.go:182] Loaded profile config "functional-745007": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0906 18:47:48.117483   23612 config.go:182] Loaded profile config "functional-745007": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0906 18:47:48.118042   23612 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0906 18:47:48.118095   23612 main.go:141] libmachine: Launching plugin server for driver kvm2
I0906 18:47:48.135165   23612 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34919
I0906 18:47:48.135574   23612 main.go:141] libmachine: () Calling .GetVersion
I0906 18:47:48.136184   23612 main.go:141] libmachine: Using API Version  1
I0906 18:47:48.136203   23612 main.go:141] libmachine: () Calling .SetConfigRaw
I0906 18:47:48.136580   23612 main.go:141] libmachine: () Calling .GetMachineName
I0906 18:47:48.136809   23612 main.go:141] libmachine: (functional-745007) Calling .GetState
I0906 18:47:48.138762   23612 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0906 18:47:48.138814   23612 main.go:141] libmachine: Launching plugin server for driver kvm2
I0906 18:47:48.153214   23612 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41035
I0906 18:47:48.153647   23612 main.go:141] libmachine: () Calling .GetVersion
I0906 18:47:48.154058   23612 main.go:141] libmachine: Using API Version  1
I0906 18:47:48.154076   23612 main.go:141] libmachine: () Calling .SetConfigRaw
I0906 18:47:48.154379   23612 main.go:141] libmachine: () Calling .GetMachineName
I0906 18:47:48.154582   23612 main.go:141] libmachine: (functional-745007) Calling .DriverName
I0906 18:47:48.154776   23612 ssh_runner.go:195] Run: systemctl --version
I0906 18:47:48.154812   23612 main.go:141] libmachine: (functional-745007) Calling .GetSSHHostname
I0906 18:47:48.157211   23612 main.go:141] libmachine: (functional-745007) DBG | domain functional-745007 has defined MAC address 52:54:00:4c:20:1a in network mk-functional-745007
I0906 18:47:48.157596   23612 main.go:141] libmachine: (functional-745007) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:20:1a", ip: ""} in network mk-functional-745007: {Iface:virbr1 ExpiryTime:2024-09-06 19:44:39 +0000 UTC Type:0 Mac:52:54:00:4c:20:1a Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:functional-745007 Clientid:01:52:54:00:4c:20:1a}
I0906 18:47:48.157626   23612 main.go:141] libmachine: (functional-745007) DBG | domain functional-745007 has defined IP address 192.168.39.170 and MAC address 52:54:00:4c:20:1a in network mk-functional-745007
I0906 18:47:48.157769   23612 main.go:141] libmachine: (functional-745007) Calling .GetSSHPort
I0906 18:47:48.157924   23612 main.go:141] libmachine: (functional-745007) Calling .GetSSHKeyPath
I0906 18:47:48.158072   23612 main.go:141] libmachine: (functional-745007) Calling .GetSSHUsername
I0906 18:47:48.158193   23612 sshutil.go:53] new ssh client: &{IP:192.168.39.170 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19576-6054/.minikube/machines/functional-745007/id_rsa Username:docker}
I0906 18:47:48.241416   23612 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
I0906 18:47:48.276384   23612 main.go:141] libmachine: Making call to close driver server
I0906 18:47:48.276395   23612 main.go:141] libmachine: (functional-745007) Calling .Close
I0906 18:47:48.276662   23612 main.go:141] libmachine: Successfully made call to close driver server
I0906 18:47:48.276680   23612 main.go:141] libmachine: Making call to close connection to plugin binary
I0906 18:47:48.276691   23612 main.go:141] libmachine: Making call to close driver server
I0906 18:47:48.276700   23612 main.go:141] libmachine: (functional-745007) Calling .Close
I0906 18:47:48.276923   23612 main.go:141] libmachine: Successfully made call to close driver server
I0906 18:47:48.276947   23612 main.go:141] libmachine: Making call to close connection to plugin binary
I0906 18:47:48.277013   23612 main.go:141] libmachine: (functional-745007) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.21s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.25s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-745007 image ls --format table --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-745007 image ls --format table --alsologtostderr:
|---------------------------------------------|-------------------|---------------|--------|
|                    Image                    |        Tag        |   Image ID    |  Size  |
|---------------------------------------------|-------------------|---------------|--------|
| docker.io/kicbase/echo-server               | functional-745007 | 9056ab77afb8e | 4.94MB |
| gcr.io/k8s-minikube/storage-provisioner     | v5                | 6e38f40d628db | 31.5MB |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc      | 56cc512116c8f | 4.4MB  |
| registry.k8s.io/echoserver                  | 1.8               | 82e4c8a736a4f | 95.4MB |
| registry.k8s.io/kube-controller-manager     | v1.31.0           | 045733566833c | 88.4MB |
| registry.k8s.io/kube-apiserver              | v1.31.0           | 604f5db92eaa8 | 94.2MB |
| registry.k8s.io/coredns/coredns             | v1.11.1           | cbb01a7bd410d | 59.8MB |
| registry.k8s.io/pause                       | 3.1               | da86e6ba6ca19 | 742kB  |
| localhost/my-image                          | functional-745007 | 72f921b6f4486 | 1.24MB |
| registry.k8s.io/kube-scheduler              | v1.31.0           | 1766f54c897f0 | 67.4MB |
| registry.k8s.io/pause                       | latest            | 350b164e7ae1d | 240kB  |
| registry.k8s.io/etcd                        | 3.5.15-0          | 2e96e5913fc06 | 148MB  |
| registry.k8s.io/pause                       | 3.10              | 873ed75102791 | 736kB  |
| registry.k8s.io/kube-proxy                  | v1.31.0           | ad83b2ca7b09e | 91.5MB |
| registry.k8s.io/pause                       | 3.3               | 0184c1613d929 | 683kB  |
| docker.io/library/minikube-local-cache-test | functional-745007 | b24a3fe61c67e | 30B    |
| docker.io/library/nginx                     | latest            | 39286ab8a5e14 | 188MB  |
|---------------------------------------------|-------------------|---------------|--------|
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-745007 image ls --format table --alsologtostderr:
I0906 18:47:52.477274   23789 out.go:345] Setting OutFile to fd 1 ...
I0906 18:47:52.477402   23789 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0906 18:47:52.477413   23789 out.go:358] Setting ErrFile to fd 2...
I0906 18:47:52.477420   23789 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0906 18:47:52.477592   23789 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19576-6054/.minikube/bin
I0906 18:47:52.478123   23789 config.go:182] Loaded profile config "functional-745007": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0906 18:47:52.478212   23789 config.go:182] Loaded profile config "functional-745007": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0906 18:47:52.478564   23789 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0906 18:47:52.478606   23789 main.go:141] libmachine: Launching plugin server for driver kvm2
I0906 18:47:52.493767   23789 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35473
I0906 18:47:52.494392   23789 main.go:141] libmachine: () Calling .GetVersion
I0906 18:47:52.495090   23789 main.go:141] libmachine: Using API Version  1
I0906 18:47:52.495120   23789 main.go:141] libmachine: () Calling .SetConfigRaw
I0906 18:47:52.495461   23789 main.go:141] libmachine: () Calling .GetMachineName
I0906 18:47:52.495668   23789 main.go:141] libmachine: (functional-745007) Calling .GetState
I0906 18:47:52.497758   23789 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0906 18:47:52.497808   23789 main.go:141] libmachine: Launching plugin server for driver kvm2
I0906 18:47:52.512726   23789 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32995
I0906 18:47:52.513224   23789 main.go:141] libmachine: () Calling .GetVersion
I0906 18:47:52.513722   23789 main.go:141] libmachine: Using API Version  1
I0906 18:47:52.513745   23789 main.go:141] libmachine: () Calling .SetConfigRaw
I0906 18:47:52.514111   23789 main.go:141] libmachine: () Calling .GetMachineName
I0906 18:47:52.514257   23789 main.go:141] libmachine: (functional-745007) Calling .DriverName
I0906 18:47:52.514488   23789 ssh_runner.go:195] Run: systemctl --version
I0906 18:47:52.514527   23789 main.go:141] libmachine: (functional-745007) Calling .GetSSHHostname
I0906 18:47:52.517525   23789 main.go:141] libmachine: (functional-745007) DBG | domain functional-745007 has defined MAC address 52:54:00:4c:20:1a in network mk-functional-745007
I0906 18:47:52.517890   23789 main.go:141] libmachine: (functional-745007) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:20:1a", ip: ""} in network mk-functional-745007: {Iface:virbr1 ExpiryTime:2024-09-06 19:44:39 +0000 UTC Type:0 Mac:52:54:00:4c:20:1a Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:functional-745007 Clientid:01:52:54:00:4c:20:1a}
I0906 18:47:52.517913   23789 main.go:141] libmachine: (functional-745007) DBG | domain functional-745007 has defined IP address 192.168.39.170 and MAC address 52:54:00:4c:20:1a in network mk-functional-745007
I0906 18:47:52.518037   23789 main.go:141] libmachine: (functional-745007) Calling .GetSSHPort
I0906 18:47:52.518186   23789 main.go:141] libmachine: (functional-745007) Calling .GetSSHKeyPath
I0906 18:47:52.518328   23789 main.go:141] libmachine: (functional-745007) Calling .GetSSHUsername
I0906 18:47:52.518468   23789 sshutil.go:53] new ssh client: &{IP:192.168.39.170 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19576-6054/.minikube/machines/functional-745007/id_rsa Username:docker}
I0906 18:47:52.629504   23789 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
I0906 18:47:52.686898   23789 main.go:141] libmachine: Making call to close driver server
I0906 18:47:52.686918   23789 main.go:141] libmachine: (functional-745007) Calling .Close
I0906 18:47:52.687186   23789 main.go:141] libmachine: Successfully made call to close driver server
I0906 18:47:52.687213   23789 main.go:141] libmachine: Making call to close connection to plugin binary
I0906 18:47:52.687222   23789 main.go:141] libmachine: Making call to close driver server
I0906 18:47:52.687229   23789 main.go:141] libmachine: (functional-745007) Calling .Close
I0906 18:47:52.687439   23789 main.go:141] libmachine: Successfully made call to close driver server
I0906 18:47:52.687466   23789 main.go:141] libmachine: Making call to close connection to plugin binary
I0906 18:47:52.687532   23789 main.go:141] libmachine: (functional-745007) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.25s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.22s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-745007 image ls --format json --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-745007 image ls --format json --alsologtostderr:
[{"id":"1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.31.0"],"size":"67400000"},{"id":"873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.10"],"size":"736000"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4400000"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"742000"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":[],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"95400000"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"240000"},{"id":"b24a3fe61c67ecd2a29a2f889f0c009cc36bfd5cd5fcdd53ed44f900615b907a","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-745007"],"size":"30"},{"id":"604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.31.0"],"size":"94200000"},{"id":"cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.1"],"size":"59800000"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31500000"},{"id":"72f921b6f4486a28000f0a23ed18be4bc20763cc834922e84351db8062779141","repoDigests":[],"repoTags":["localhost/my-image:functional-745007"],"size":"1240000"},{"id":"2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.5.15-0"],"size":"148000000"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"683000"},{"id":"39286ab8a5e14aeaf5fdd6e2fac76e0c8d31a0c07224f0ee5e6be502f12e93f3","repoDigests":[],"repoTags":["docker.io/library/nginx:latest"],"size":"188000000"},{"id":"045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.31.0"],"size":"88400000"},{"id":"ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.31.0"],"size":"91500000"},{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":[],"repoTags":["docker.io/kicbase/echo-server:functional-745007"],"size":"4940000"}]
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-745007 image ls --format json --alsologtostderr:
I0906 18:47:52.258420   23755 out.go:345] Setting OutFile to fd 1 ...
I0906 18:47:52.258660   23755 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0906 18:47:52.258669   23755 out.go:358] Setting ErrFile to fd 2...
I0906 18:47:52.258674   23755 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0906 18:47:52.258878   23755 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19576-6054/.minikube/bin
I0906 18:47:52.259414   23755 config.go:182] Loaded profile config "functional-745007": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0906 18:47:52.259527   23755 config.go:182] Loaded profile config "functional-745007": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0906 18:47:52.259928   23755 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0906 18:47:52.259970   23755 main.go:141] libmachine: Launching plugin server for driver kvm2
I0906 18:47:52.274452   23755 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37695
I0906 18:47:52.274968   23755 main.go:141] libmachine: () Calling .GetVersion
I0906 18:47:52.275643   23755 main.go:141] libmachine: Using API Version  1
I0906 18:47:52.275669   23755 main.go:141] libmachine: () Calling .SetConfigRaw
I0906 18:47:52.276100   23755 main.go:141] libmachine: () Calling .GetMachineName
I0906 18:47:52.276305   23755 main.go:141] libmachine: (functional-745007) Calling .GetState
I0906 18:47:52.278437   23755 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0906 18:47:52.278480   23755 main.go:141] libmachine: Launching plugin server for driver kvm2
I0906 18:47:52.293757   23755 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43317
I0906 18:47:52.294173   23755 main.go:141] libmachine: () Calling .GetVersion
I0906 18:47:52.294663   23755 main.go:141] libmachine: Using API Version  1
I0906 18:47:52.294687   23755 main.go:141] libmachine: () Calling .SetConfigRaw
I0906 18:47:52.295105   23755 main.go:141] libmachine: () Calling .GetMachineName
I0906 18:47:52.295335   23755 main.go:141] libmachine: (functional-745007) Calling .DriverName
I0906 18:47:52.295530   23755 ssh_runner.go:195] Run: systemctl --version
I0906 18:47:52.295550   23755 main.go:141] libmachine: (functional-745007) Calling .GetSSHHostname
I0906 18:47:52.298232   23755 main.go:141] libmachine: (functional-745007) DBG | domain functional-745007 has defined MAC address 52:54:00:4c:20:1a in network mk-functional-745007
I0906 18:47:52.298647   23755 main.go:141] libmachine: (functional-745007) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:20:1a", ip: ""} in network mk-functional-745007: {Iface:virbr1 ExpiryTime:2024-09-06 19:44:39 +0000 UTC Type:0 Mac:52:54:00:4c:20:1a Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:functional-745007 Clientid:01:52:54:00:4c:20:1a}
I0906 18:47:52.298668   23755 main.go:141] libmachine: (functional-745007) DBG | domain functional-745007 has defined IP address 192.168.39.170 and MAC address 52:54:00:4c:20:1a in network mk-functional-745007
I0906 18:47:52.298890   23755 main.go:141] libmachine: (functional-745007) Calling .GetSSHPort
I0906 18:47:52.299044   23755 main.go:141] libmachine: (functional-745007) Calling .GetSSHKeyPath
I0906 18:47:52.299175   23755 main.go:141] libmachine: (functional-745007) Calling .GetSSHUsername
I0906 18:47:52.299305   23755 sshutil.go:53] new ssh client: &{IP:192.168.39.170 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19576-6054/.minikube/machines/functional-745007/id_rsa Username:docker}
I0906 18:47:52.397362   23755 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
I0906 18:47:52.430715   23755 main.go:141] libmachine: Making call to close driver server
I0906 18:47:52.430730   23755 main.go:141] libmachine: (functional-745007) Calling .Close
I0906 18:47:52.430987   23755 main.go:141] libmachine: Successfully made call to close driver server
I0906 18:47:52.431016   23755 main.go:141] libmachine: Making call to close connection to plugin binary
I0906 18:47:52.431025   23755 main.go:141] libmachine: Making call to close driver server
I0906 18:47:52.431032   23755 main.go:141] libmachine: (functional-745007) Calling .Close
I0906 18:47:52.431241   23755 main.go:141] libmachine: Successfully made call to close driver server
I0906 18:47:52.431256   23755 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.22s)
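As a side note on the JSON format verified above: the `image ls --format json` output is a plain array of image records, so it can be post-processed with standard tooling. A minimal Python sketch (assuming a Python interpreter is available; the two entries below are copied from the JSON output above, and the `size` field is a string of bytes in this format):

```python
import json

# Excerpt of the `minikube image ls --format json` output shown above
# (two entries reproduced; the real output lists every cached image).
raw = '''[
  {"id": "873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136",
   "repoDigests": [], "repoTags": ["registry.k8s.io/pause:3.10"], "size": "736000"},
  {"id": "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4",
   "repoDigests": [], "repoTags": ["registry.k8s.io/etcd:3.5.15-0"], "size": "148000000"}
]'''

images = json.loads(raw)

# Index each record by its repo tags and total up the (string-typed) sizes.
by_tag = {tag: img for img in images for tag in img["repoTags"]}
total_bytes = sum(int(img["size"]) for img in images)

print(by_tag["registry.k8s.io/pause:3.10"]["id"][:12])  # short image ID
print(total_bytes)
```

This is only an illustration of the output shape the test asserts against, not part of the test suite itself.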

TestFunctional/parallel/ImageCommands/ImageListYaml (0.21s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-745007 image ls --format yaml --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-745007 image ls --format yaml --alsologtostderr:
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests: []
repoTags:
- docker.io/kicbase/echo-server:functional-745007
size: "4940000"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests: []
repoTags:
- registry.k8s.io/echoserver:1.8
size: "95400000"
- id: 045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.31.0
size: "88400000"
- id: ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.31.0
size: "91500000"
- id: 2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.5.15-0
size: "148000000"
- id: cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.1
size: "59800000"
- id: 39286ab8a5e14aeaf5fdd6e2fac76e0c8d31a0c07224f0ee5e6be502f12e93f3
repoDigests: []
repoTags:
- docker.io/library/nginx:latest
size: "188000000"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31500000"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4400000"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "742000"
- id: b24a3fe61c67ecd2a29a2f889f0c009cc36bfd5cd5fcdd53ed44f900615b907a
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-745007
size: "30"
- id: 1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.31.0
size: "67400000"
- id: 873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.10
size: "736000"
- id: 604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.31.0
size: "94200000"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "683000"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "240000"

functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-745007 image ls --format yaml --alsologtostderr:
I0906 18:47:48.320451   23636 out.go:345] Setting OutFile to fd 1 ...
I0906 18:47:48.320906   23636 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0906 18:47:48.320951   23636 out.go:358] Setting ErrFile to fd 2...
I0906 18:47:48.320968   23636 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0906 18:47:48.321425   23636 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19576-6054/.minikube/bin
I0906 18:47:48.322335   23636 config.go:182] Loaded profile config "functional-745007": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0906 18:47:48.322445   23636 config.go:182] Loaded profile config "functional-745007": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0906 18:47:48.322982   23636 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0906 18:47:48.323055   23636 main.go:141] libmachine: Launching plugin server for driver kvm2
I0906 18:47:48.337432   23636 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35021
I0906 18:47:48.337835   23636 main.go:141] libmachine: () Calling .GetVersion
I0906 18:47:48.338341   23636 main.go:141] libmachine: Using API Version  1
I0906 18:47:48.338363   23636 main.go:141] libmachine: () Calling .SetConfigRaw
I0906 18:47:48.338665   23636 main.go:141] libmachine: () Calling .GetMachineName
I0906 18:47:48.338859   23636 main.go:141] libmachine: (functional-745007) Calling .GetState
I0906 18:47:48.340865   23636 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0906 18:47:48.340912   23636 main.go:141] libmachine: Launching plugin server for driver kvm2
I0906 18:47:48.354788   23636 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33957
I0906 18:47:48.355218   23636 main.go:141] libmachine: () Calling .GetVersion
I0906 18:47:48.355697   23636 main.go:141] libmachine: Using API Version  1
I0906 18:47:48.355717   23636 main.go:141] libmachine: () Calling .SetConfigRaw
I0906 18:47:48.356016   23636 main.go:141] libmachine: () Calling .GetMachineName
I0906 18:47:48.356196   23636 main.go:141] libmachine: (functional-745007) Calling .DriverName
I0906 18:47:48.356413   23636 ssh_runner.go:195] Run: systemctl --version
I0906 18:47:48.356446   23636 main.go:141] libmachine: (functional-745007) Calling .GetSSHHostname
I0906 18:47:48.359083   23636 main.go:141] libmachine: (functional-745007) DBG | domain functional-745007 has defined MAC address 52:54:00:4c:20:1a in network mk-functional-745007
I0906 18:47:48.359519   23636 main.go:141] libmachine: (functional-745007) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:20:1a", ip: ""} in network mk-functional-745007: {Iface:virbr1 ExpiryTime:2024-09-06 19:44:39 +0000 UTC Type:0 Mac:52:54:00:4c:20:1a Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:functional-745007 Clientid:01:52:54:00:4c:20:1a}
I0906 18:47:48.359554   23636 main.go:141] libmachine: (functional-745007) DBG | domain functional-745007 has defined IP address 192.168.39.170 and MAC address 52:54:00:4c:20:1a in network mk-functional-745007
I0906 18:47:48.359683   23636 main.go:141] libmachine: (functional-745007) Calling .GetSSHPort
I0906 18:47:48.359840   23636 main.go:141] libmachine: (functional-745007) Calling .GetSSHKeyPath
I0906 18:47:48.359969   23636 main.go:141] libmachine: (functional-745007) Calling .GetSSHUsername
I0906 18:47:48.360074   23636 sshutil.go:53] new ssh client: &{IP:192.168.39.170 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19576-6054/.minikube/machines/functional-745007/id_rsa Username:docker}
I0906 18:47:48.441251   23636 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
I0906 18:47:48.483490   23636 main.go:141] libmachine: Making call to close driver server
I0906 18:47:48.483502   23636 main.go:141] libmachine: (functional-745007) Calling .Close
I0906 18:47:48.483803   23636 main.go:141] libmachine: (functional-745007) DBG | Closing plugin on server side
I0906 18:47:48.483859   23636 main.go:141] libmachine: Successfully made call to close driver server
I0906 18:47:48.483880   23636 main.go:141] libmachine: Making call to close connection to plugin binary
I0906 18:47:48.483899   23636 main.go:141] libmachine: Making call to close driver server
I0906 18:47:48.483910   23636 main.go:141] libmachine: (functional-745007) Calling .Close
I0906 18:47:48.484181   23636 main.go:141] libmachine: (functional-745007) DBG | Closing plugin on server side
I0906 18:47:48.484199   23636 main.go:141] libmachine: Successfully made call to close driver server
I0906 18:47:48.484228   23636 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.21s)

TestFunctional/parallel/ImageCommands/ImageBuild (3.73s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:308: (dbg) Run:  out/minikube-linux-amd64 -p functional-745007 ssh pgrep buildkitd
functional_test.go:308: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-745007 ssh pgrep buildkitd: exit status 1 (188.680609ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:315: (dbg) Run:  out/minikube-linux-amd64 -p functional-745007 image build -t localhost/my-image:functional-745007 testdata/build --alsologtostderr
functional_test.go:315: (dbg) Done: out/minikube-linux-amd64 -p functional-745007 image build -t localhost/my-image:functional-745007 testdata/build --alsologtostderr: (3.344060429s)
functional_test.go:323: (dbg) Stderr: out/minikube-linux-amd64 -p functional-745007 image build -t localhost/my-image:functional-745007 testdata/build --alsologtostderr:
I0906 18:47:48.721713   23689 out.go:345] Setting OutFile to fd 1 ...
I0906 18:47:48.722038   23689 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0906 18:47:48.722052   23689 out.go:358] Setting ErrFile to fd 2...
I0906 18:47:48.722059   23689 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0906 18:47:48.722349   23689 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19576-6054/.minikube/bin
I0906 18:47:48.723159   23689 config.go:182] Loaded profile config "functional-745007": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0906 18:47:48.723678   23689 config.go:182] Loaded profile config "functional-745007": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0906 18:47:48.724070   23689 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0906 18:47:48.724113   23689 main.go:141] libmachine: Launching plugin server for driver kvm2
I0906 18:47:48.738603   23689 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46735
I0906 18:47:48.739053   23689 main.go:141] libmachine: () Calling .GetVersion
I0906 18:47:48.739646   23689 main.go:141] libmachine: Using API Version  1
I0906 18:47:48.739668   23689 main.go:141] libmachine: () Calling .SetConfigRaw
I0906 18:47:48.740014   23689 main.go:141] libmachine: () Calling .GetMachineName
I0906 18:47:48.740219   23689 main.go:141] libmachine: (functional-745007) Calling .GetState
I0906 18:47:48.741989   23689 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0906 18:47:48.742022   23689 main.go:141] libmachine: Launching plugin server for driver kvm2
I0906 18:47:48.760941   23689 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45999
I0906 18:47:48.761426   23689 main.go:141] libmachine: () Calling .GetVersion
I0906 18:47:48.761896   23689 main.go:141] libmachine: Using API Version  1
I0906 18:47:48.761928   23689 main.go:141] libmachine: () Calling .SetConfigRaw
I0906 18:47:48.762366   23689 main.go:141] libmachine: () Calling .GetMachineName
I0906 18:47:48.762566   23689 main.go:141] libmachine: (functional-745007) Calling .DriverName
I0906 18:47:48.762784   23689 ssh_runner.go:195] Run: systemctl --version
I0906 18:47:48.762807   23689 main.go:141] libmachine: (functional-745007) Calling .GetSSHHostname
I0906 18:47:48.765578   23689 main.go:141] libmachine: (functional-745007) DBG | domain functional-745007 has defined MAC address 52:54:00:4c:20:1a in network mk-functional-745007
I0906 18:47:48.765965   23689 main.go:141] libmachine: (functional-745007) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:20:1a", ip: ""} in network mk-functional-745007: {Iface:virbr1 ExpiryTime:2024-09-06 19:44:39 +0000 UTC Type:0 Mac:52:54:00:4c:20:1a Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:functional-745007 Clientid:01:52:54:00:4c:20:1a}
I0906 18:47:48.765991   23689 main.go:141] libmachine: (functional-745007) DBG | domain functional-745007 has defined IP address 192.168.39.170 and MAC address 52:54:00:4c:20:1a in network mk-functional-745007
I0906 18:47:48.766141   23689 main.go:141] libmachine: (functional-745007) Calling .GetSSHPort
I0906 18:47:48.766279   23689 main.go:141] libmachine: (functional-745007) Calling .GetSSHKeyPath
I0906 18:47:48.766414   23689 main.go:141] libmachine: (functional-745007) Calling .GetSSHUsername
I0906 18:47:48.766635   23689 sshutil.go:53] new ssh client: &{IP:192.168.39.170 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19576-6054/.minikube/machines/functional-745007/id_rsa Username:docker}
I0906 18:47:48.850636   23689 build_images.go:161] Building image from path: /tmp/build.1842917707.tar
I0906 18:47:48.850700   23689 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0906 18:47:48.863346   23689 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.1842917707.tar
I0906 18:47:48.868915   23689 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.1842917707.tar: stat -c "%s %y" /var/lib/minikube/build/build.1842917707.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.1842917707.tar': No such file or directory
I0906 18:47:48.868939   23689 ssh_runner.go:362] scp /tmp/build.1842917707.tar --> /var/lib/minikube/build/build.1842917707.tar (3072 bytes)
I0906 18:47:48.901764   23689 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.1842917707
I0906 18:47:48.918924   23689 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.1842917707 -xf /var/lib/minikube/build/build.1842917707.tar
I0906 18:47:48.932650   23689 docker.go:360] Building image: /var/lib/minikube/build/build.1842917707
I0906 18:47:48.932706   23689 ssh_runner.go:195] Run: docker build -t localhost/my-image:functional-745007 /var/lib/minikube/build/build.1842917707
#0 building with "default" instance using docker driver

#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.1s

#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 1.4s

#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s

#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.1s

#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.0s done
#5 sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 770B / 770B done
#5 sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee 527B / 527B done
#5 sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a 1.46kB / 1.46kB done
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0B / 772.79kB 0.1s
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 772.79kB / 772.79kB 0.4s done
#5 extracting sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
#5 extracting sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0.1s done
#5 DONE 0.6s

#6 [2/3] RUN true
#6 DONE 0.5s

#7 [3/3] ADD content.txt /
#7 DONE 0.1s

#8 exporting to image
#8 exporting layers 0.1s done
#8 writing image sha256:72f921b6f4486a28000f0a23ed18be4bc20763cc834922e84351db8062779141
#8 writing image sha256:72f921b6f4486a28000f0a23ed18be4bc20763cc834922e84351db8062779141 done
#8 naming to localhost/my-image:functional-745007 done
#8 DONE 0.1s
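For reference, the Dockerfile driving the build above can be reconstructed from steps #1–#7 of the BuildKit log. This is a sketch inferred from the log output only, not the test's actual source file:

```
# Approximate Dockerfile inferred from the BuildKit log (97B, three steps)
FROM gcr.io/k8s-minikube/busybox
RUN true
ADD content.txt /
```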
I0906 18:47:51.993351   23689 ssh_runner.go:235] Completed: docker build -t localhost/my-image:functional-745007 /var/lib/minikube/build/build.1842917707: (3.060622226s)
I0906 18:47:51.993414   23689 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.1842917707
I0906 18:47:52.006035   23689 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.1842917707.tar
I0906 18:47:52.017183   23689 build_images.go:217] Built localhost/my-image:functional-745007 from /tmp/build.1842917707.tar
I0906 18:47:52.017207   23689 build_images.go:133] succeeded building to: functional-745007
I0906 18:47:52.017211   23689 build_images.go:134] failed building to: 
I0906 18:47:52.017231   23689 main.go:141] libmachine: Making call to close driver server
I0906 18:47:52.017240   23689 main.go:141] libmachine: (functional-745007) Calling .Close
I0906 18:47:52.017501   23689 main.go:141] libmachine: Successfully made call to close driver server
I0906 18:47:52.017516   23689 main.go:141] libmachine: Making call to close connection to plugin binary
I0906 18:47:52.017525   23689 main.go:141] libmachine: Making call to close driver server
I0906 18:47:52.017532   23689 main.go:141] libmachine: (functional-745007) Calling .Close
I0906 18:47:52.017817   23689 main.go:141] libmachine: (functional-745007) DBG | Closing plugin on server side
I0906 18:47:52.017838   23689 main.go:141] libmachine: Successfully made call to close driver server
I0906 18:47:52.017868   23689 main.go:141] libmachine: Making call to close connection to plugin binary
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-745007 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.73s)

TestFunctional/parallel/ImageCommands/Setup (1.53s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:342: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:342: (dbg) Done: docker pull kicbase/echo-server:1.0: (1.506079912s)
functional_test.go:347: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-745007
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.53s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.05s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:355: (dbg) Run:  out/minikube-linux-amd64 -p functional-745007 image load --daemon kicbase/echo-server:functional-745007 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-745007 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.05s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.75s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p functional-745007 image load --daemon kicbase/echo-server:functional-745007 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-745007 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.75s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.44s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:235: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:240: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-745007
functional_test.go:245: (dbg) Run:  out/minikube-linux-amd64 -p functional-745007 image load --daemon kicbase/echo-server:functional-745007 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-745007 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.44s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.31s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-745007 image save kicbase/echo-server:functional-745007 /home/jenkins/workspace/KVM_Linux_integration/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.31s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.36s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:392: (dbg) Run:  out/minikube-linux-amd64 -p functional-745007 image rm kicbase/echo-server:functional-745007 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-745007 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.36s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.68s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:409: (dbg) Run:  out/minikube-linux-amd64 -p functional-745007 image load /home/jenkins/workspace/KVM_Linux_integration/echo-server-save.tar --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-745007 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.68s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.45s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:419: (dbg) Run:  docker rmi kicbase/echo-server:functional-745007
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-745007 image save --daemon kicbase/echo-server:functional-745007 --alsologtostderr
functional_test.go:432: (dbg) Run:  docker image inspect kicbase/echo-server:functional-745007
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.45s)

TestFunctional/parallel/MountCmd/specific-port (1.99s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-745007 /tmp/TestFunctionalparallelMountCmdspecific-port2692772280/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-745007 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-745007 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (188.024138ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-745007 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-745007 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-745007 /tmp/TestFunctionalparallelMountCmdspecific-port2692772280/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-745007 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-745007 ssh "sudo umount -f /mount-9p": exit status 1 (232.538059ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-745007 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-745007 /tmp/TestFunctionalparallelMountCmdspecific-port2692772280/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.99s)

TestFunctional/parallel/DockerEnv/bash (0.81s)

=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:499: (dbg) Run:  /bin/bash -c "eval $(out/minikube-linux-amd64 -p functional-745007 docker-env) && out/minikube-linux-amd64 status -p functional-745007"
functional_test.go:522: (dbg) Run:  /bin/bash -c "eval $(out/minikube-linux-amd64 -p functional-745007 docker-env) && docker images"
--- PASS: TestFunctional/parallel/DockerEnv/bash (0.81s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.09s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-745007 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.09s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.09s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-745007 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.09s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.09s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-745007 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.09s)

TestFunctional/parallel/ServiceCmd/List (0.34s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1459: (dbg) Run:  out/minikube-linux-amd64 -p functional-745007 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.34s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.31s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1489: (dbg) Run:  out/minikube-linux-amd64 -p functional-745007 service list -o json
functional_test.go:1494: Took "305.450862ms" to run "out/minikube-linux-amd64 -p functional-745007 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.31s)

TestFunctional/parallel/MountCmd/VerifyCleanup (1.48s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-745007 /tmp/TestFunctionalparallelMountCmdVerifyCleanup741786114/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-745007 /tmp/TestFunctionalparallelMountCmdVerifyCleanup741786114/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-745007 /tmp/TestFunctionalparallelMountCmdVerifyCleanup741786114/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-745007 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-745007 ssh "findmnt -T" /mount1: exit status 1 (261.276846ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-745007 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-745007 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-745007 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-745007 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-745007 /tmp/TestFunctionalparallelMountCmdVerifyCleanup741786114/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-745007 /tmp/TestFunctionalparallelMountCmdVerifyCleanup741786114/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-745007 /tmp/TestFunctionalparallelMountCmdVerifyCleanup741786114/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.48s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.45s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1509: (dbg) Run:  out/minikube-linux-amd64 -p functional-745007 service --namespace=default --https --url hello-node
functional_test.go:1522: found endpoint: https://192.168.39.170:32134
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.45s)

TestFunctional/parallel/ServiceCmd/Format (0.36s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1540: (dbg) Run:  out/minikube-linux-amd64 -p functional-745007 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.36s)

TestFunctional/parallel/ServiceCmd/URL (0.29s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1559: (dbg) Run:  out/minikube-linux-amd64 -p functional-745007 service hello-node --url
functional_test.go:1565: found endpoint for hello-node: http://192.168.39.170:32134
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.29s)

TestFunctional/delete_echo-server_images (0.04s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-745007
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

TestFunctional/delete_my-image_image (0.01s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:198: (dbg) Run:  docker rmi -f localhost/my-image:functional-745007
--- PASS: TestFunctional/delete_my-image_image (0.01s)

TestFunctional/delete_minikube_cached_images (0.01s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:206: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-745007
--- PASS: TestFunctional/delete_minikube_cached_images (0.01s)

TestGvisorAddon (243.07s)

=== RUN   TestGvisorAddon
=== PAUSE TestGvisorAddon

=== CONT  TestGvisorAddon
gvisor_addon_test.go:52: (dbg) Run:  out/minikube-linux-amd64 start -p gvisor-767932 --memory=2200 --container-runtime=containerd --docker-opt containerd=/var/run/containerd/containerd.sock --driver=kvm2 
E0906 19:28:06.742191   13284 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6054/.minikube/profiles/addons-009491/client.crt: no such file or directory" logger="UnhandledError"
gvisor_addon_test.go:52: (dbg) Done: out/minikube-linux-amd64 start -p gvisor-767932 --memory=2200 --container-runtime=containerd --docker-opt containerd=/var/run/containerd/containerd.sock --driver=kvm2 : (2m5.693538454s)
gvisor_addon_test.go:58: (dbg) Run:  out/minikube-linux-amd64 -p gvisor-767932 cache add gcr.io/k8s-minikube/gvisor-addon:2
gvisor_addon_test.go:58: (dbg) Done: out/minikube-linux-amd64 -p gvisor-767932 cache add gcr.io/k8s-minikube/gvisor-addon:2: (24.508596316s)
gvisor_addon_test.go:63: (dbg) Run:  out/minikube-linux-amd64 -p gvisor-767932 addons enable gvisor
gvisor_addon_test.go:63: (dbg) Done: out/minikube-linux-amd64 -p gvisor-767932 addons enable gvisor: (3.598919902s)
gvisor_addon_test.go:68: (dbg) TestGvisorAddon: waiting 4m0s for pods matching "kubernetes.io/minikube-addons=gvisor" in namespace "kube-system" ...
helpers_test.go:344: "gvisor" [6cc0833c-40c7-4829-9d62-274393f7fdb8] Running
gvisor_addon_test.go:68: (dbg) TestGvisorAddon: kubernetes.io/minikube-addons=gvisor healthy within 6.009064405s
gvisor_addon_test.go:73: (dbg) Run:  kubectl --context gvisor-767932 replace --force -f testdata/nginx-gvisor.yaml
gvisor_addon_test.go:78: (dbg) TestGvisorAddon: waiting 4m0s for pods matching "run=nginx,runtime=gvisor" in namespace "default" ...
helpers_test.go:344: "nginx-gvisor" [d900b7dc-8c75-46d3-b042-e319417abd79] Pending
helpers_test.go:344: "nginx-gvisor" [d900b7dc-8c75-46d3-b042-e319417abd79] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-gvisor" [d900b7dc-8c75-46d3-b042-e319417abd79] Running
gvisor_addon_test.go:78: (dbg) TestGvisorAddon: run=nginx,runtime=gvisor healthy within 29.005460844s
gvisor_addon_test.go:83: (dbg) Run:  out/minikube-linux-amd64 stop -p gvisor-767932
gvisor_addon_test.go:83: (dbg) Done: out/minikube-linux-amd64 stop -p gvisor-767932: (7.296511594s)
gvisor_addon_test.go:88: (dbg) Run:  out/minikube-linux-amd64 start -p gvisor-767932 --memory=2200 --container-runtime=containerd --docker-opt containerd=/var/run/containerd/containerd.sock --driver=kvm2 
gvisor_addon_test.go:88: (dbg) Done: out/minikube-linux-amd64 start -p gvisor-767932 --memory=2200 --container-runtime=containerd --docker-opt containerd=/var/run/containerd/containerd.sock --driver=kvm2 : (34.587242958s)
gvisor_addon_test.go:92: (dbg) TestGvisorAddon: waiting 4m0s for pods matching "kubernetes.io/minikube-addons=gvisor" in namespace "kube-system" ...
helpers_test.go:344: "gvisor" [6cc0833c-40c7-4829-9d62-274393f7fdb8] Running
gvisor_addon_test.go:92: (dbg) TestGvisorAddon: kubernetes.io/minikube-addons=gvisor healthy within 6.005081861s
gvisor_addon_test.go:95: (dbg) TestGvisorAddon: waiting 4m0s for pods matching "run=nginx,runtime=gvisor" in namespace "default" ...
helpers_test.go:344: "nginx-gvisor" [d900b7dc-8c75-46d3-b042-e319417abd79] Running / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
gvisor_addon_test.go:95: (dbg) TestGvisorAddon: run=nginx,runtime=gvisor healthy within 5.004895954s
helpers_test.go:175: Cleaning up "gvisor-767932" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p gvisor-767932
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p gvisor-767932: (1.085093148s)
--- PASS: TestGvisorAddon (243.07s)

TestMultiControlPlane/serial/StartCluster (219.38s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p ha-056304 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2 
E0906 18:48:27.237674   13284 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6054/.minikube/profiles/addons-009491/client.crt: no such file or directory" logger="UnhandledError"
E0906 18:48:47.719659   13284 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6054/.minikube/profiles/addons-009491/client.crt: no such file or directory" logger="UnhandledError"
E0906 18:49:28.681104   13284 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6054/.minikube/profiles/addons-009491/client.crt: no such file or directory" logger="UnhandledError"
E0906 18:50:50.603166   13284 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6054/.minikube/profiles/addons-009491/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 start -p ha-056304 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2 : (3m38.713186551s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-056304 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (219.38s)

TestMultiControlPlane/serial/DeployApp (6.48s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-056304 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-056304 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 kubectl -p ha-056304 -- rollout status deployment/busybox: (4.264873408s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-056304 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-056304 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-056304 -- exec busybox-7dff88458-2sqhr -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-056304 -- exec busybox-7dff88458-gdghf -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-056304 -- exec busybox-7dff88458-z9x9x -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-056304 -- exec busybox-7dff88458-2sqhr -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-056304 -- exec busybox-7dff88458-gdghf -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-056304 -- exec busybox-7dff88458-z9x9x -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-056304 -- exec busybox-7dff88458-2sqhr -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-056304 -- exec busybox-7dff88458-gdghf -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-056304 -- exec busybox-7dff88458-z9x9x -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (6.48s)
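The three nslookup targets above (kubernetes.io, kubernetes.default, kubernetes.default.svc.cluster.local) exercise progressively shorter names that the pod's resolver expands via its search domains. A minimal sketch of that expansion order, assuming a typical pod resolv.conf (the search list and ndots value below are illustrative assumptions, not captured from this run):

```python
# Sketch of glibc/musl-style search-domain expansion as a pod's
# /etc/resolv.conf would drive it. The SEARCH list is the usual one for
# a pod in the "default" namespace; treat it as an assumption.
SEARCH = ["default.svc.cluster.local", "svc.cluster.local", "cluster.local"]
NDOTS = 5  # the name is tried as-is first only if it has >= ndots dots

def candidate_fqdns(name: str) -> list[str]:
    """Return the lookup order the resolver would try for `name`."""
    if name.endswith("."):            # trailing dot: already fully qualified
        return [name.rstrip(".")]
    expanded = [f"{name}.{d}" for d in SEARCH]
    if name.count(".") >= NDOTS:
        return [name] + expanded      # absolute name first, then search list
    return expanded + [name]          # search list first, then the name as-is

# "kubernetes.default" misses on the first candidate and resolves on the
# second, "kubernetes.default.svc.cluster.local".
print(candidate_fqdns("kubernetes.default"))
```

This is why all three test lookups succeed from inside a pod even though only the last one is a resolvable absolute name.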

                                                
                                    
TestMultiControlPlane/serial/PingHostFromPods (1.25s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-056304 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-056304 -- exec busybox-7dff88458-2sqhr -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-056304 -- exec busybox-7dff88458-2sqhr -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-056304 -- exec busybox-7dff88458-gdghf -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-056304 -- exec busybox-7dff88458-gdghf -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-056304 -- exec busybox-7dff88458-z9x9x -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-056304 -- exec busybox-7dff88458-z9x9x -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.25s)
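The `awk 'NR==5' | cut -d' ' -f3` pipeline above pulls the host IP out of busybox-style nslookup output: line 5, third space-delimited field. A small stand-in for that extraction in Python; the sample output is illustrative, not captured from this run:

```python
# Busybox nslookup prints the server block, a blank line, then the
# answer; line 5 is the "Address 1: <ip> <name>" answer line.
sample = """\
Server:    10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local

Name:      host.minikube.internal
Address 1: 192.168.39.1 host.minikube.internal
"""

def host_ip(nslookup_output: str) -> str:
    """Mimic `awk 'NR==5' | cut -d' ' -f3` on the lookup output."""
    line5 = nslookup_output.splitlines()[4]   # awk NR==5 (awk is 1-based)
    return line5.split(" ")[2]                # cut -d' ' -f3 (cut is 1-based)

print(host_ip(sample))   # -> 192.168.39.1
```

The extracted address is then fed to `ping -c 1`, which is how the test verifies each pod can reach the KVM host at 192.168.39.1.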

                                                
                                    
TestMultiControlPlane/serial/AddWorkerNode (63.25s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-056304 -v=7 --alsologtostderr
E0906 18:52:27.747036   13284 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6054/.minikube/profiles/functional-745007/client.crt: no such file or directory" logger="UnhandledError"
E0906 18:52:27.753495   13284 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6054/.minikube/profiles/functional-745007/client.crt: no such file or directory" logger="UnhandledError"
E0906 18:52:27.764908   13284 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6054/.minikube/profiles/functional-745007/client.crt: no such file or directory" logger="UnhandledError"
E0906 18:52:27.786379   13284 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6054/.minikube/profiles/functional-745007/client.crt: no such file or directory" logger="UnhandledError"
E0906 18:52:27.827818   13284 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6054/.minikube/profiles/functional-745007/client.crt: no such file or directory" logger="UnhandledError"
E0906 18:52:27.909320   13284 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6054/.minikube/profiles/functional-745007/client.crt: no such file or directory" logger="UnhandledError"
E0906 18:52:28.071013   13284 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6054/.minikube/profiles/functional-745007/client.crt: no such file or directory" logger="UnhandledError"
E0906 18:52:28.392769   13284 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6054/.minikube/profiles/functional-745007/client.crt: no such file or directory" logger="UnhandledError"
E0906 18:52:29.034768   13284 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6054/.minikube/profiles/functional-745007/client.crt: no such file or directory" logger="UnhandledError"
E0906 18:52:30.316254   13284 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6054/.minikube/profiles/functional-745007/client.crt: no such file or directory" logger="UnhandledError"
E0906 18:52:32.878111   13284 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6054/.minikube/profiles/functional-745007/client.crt: no such file or directory" logger="UnhandledError"
E0906 18:52:37.999824   13284 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6054/.minikube/profiles/functional-745007/client.crt: no such file or directory" logger="UnhandledError"
E0906 18:52:48.241476   13284 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6054/.minikube/profiles/functional-745007/client.crt: no such file or directory" logger="UnhandledError"
E0906 18:53:06.741823   13284 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6054/.minikube/profiles/addons-009491/client.crt: no such file or directory" logger="UnhandledError"
E0906 18:53:08.723040   13284 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6054/.minikube/profiles/functional-745007/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 node add -p ha-056304 -v=7 --alsologtostderr: (1m2.411560297s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-056304 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (63.25s)

                                                
                                    
TestMultiControlPlane/serial/NodeLabels (0.06s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-056304 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.06s)
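The jsonpath expression above renders each node's label map inside one bracketed list. A minimal stand-in over a sample API response; the node names and labels here are illustrative assumptions, not this run's actual output, and kubectl's own map rendering differs cosmetically from `json.dumps`:

```python
import json

# Shape of `kubectl get nodes -o json` reduced to what the jsonpath touches.
nodes = {
    "items": [
        {"metadata": {"name": "ha-056304",
                      "labels": {"kubernetes.io/hostname": "ha-056304",
                                 "node-role.kubernetes.io/control-plane": ""}}},
        {"metadata": {"name": "ha-056304-m04",
                      "labels": {"kubernetes.io/hostname": "ha-056304-m04"}}},
    ]
}

# Equivalent of: jsonpath=[{range .items[*]}{.metadata.labels},{end}]
rendered = "[" + "".join(json.dumps(i["metadata"]["labels"]) + ","
                         for i in nodes["items"]) + "]"
print(rendered)
```

The test only asserts that expected label keys appear in this rendered string, which is why a 0.06s runtime is enough.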

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterClusterStart (0.54s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.54s)

                                                
                                    
TestMultiControlPlane/serial/CopyFile (12.53s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-linux-amd64 -p ha-056304 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-056304 cp testdata/cp-test.txt ha-056304:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-056304 ssh -n ha-056304 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-056304 cp ha-056304:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3823411209/001/cp-test_ha-056304.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-056304 ssh -n ha-056304 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-056304 cp ha-056304:/home/docker/cp-test.txt ha-056304-m02:/home/docker/cp-test_ha-056304_ha-056304-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-056304 ssh -n ha-056304 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-056304 ssh -n ha-056304-m02 "sudo cat /home/docker/cp-test_ha-056304_ha-056304-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-056304 cp ha-056304:/home/docker/cp-test.txt ha-056304-m03:/home/docker/cp-test_ha-056304_ha-056304-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-056304 ssh -n ha-056304 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-056304 ssh -n ha-056304-m03 "sudo cat /home/docker/cp-test_ha-056304_ha-056304-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-056304 cp ha-056304:/home/docker/cp-test.txt ha-056304-m04:/home/docker/cp-test_ha-056304_ha-056304-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-056304 ssh -n ha-056304 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-056304 ssh -n ha-056304-m04 "sudo cat /home/docker/cp-test_ha-056304_ha-056304-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-056304 cp testdata/cp-test.txt ha-056304-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-056304 ssh -n ha-056304-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-056304 cp ha-056304-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3823411209/001/cp-test_ha-056304-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-056304 ssh -n ha-056304-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-056304 cp ha-056304-m02:/home/docker/cp-test.txt ha-056304:/home/docker/cp-test_ha-056304-m02_ha-056304.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-056304 ssh -n ha-056304-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-056304 ssh -n ha-056304 "sudo cat /home/docker/cp-test_ha-056304-m02_ha-056304.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-056304 cp ha-056304-m02:/home/docker/cp-test.txt ha-056304-m03:/home/docker/cp-test_ha-056304-m02_ha-056304-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-056304 ssh -n ha-056304-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-056304 ssh -n ha-056304-m03 "sudo cat /home/docker/cp-test_ha-056304-m02_ha-056304-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-056304 cp ha-056304-m02:/home/docker/cp-test.txt ha-056304-m04:/home/docker/cp-test_ha-056304-m02_ha-056304-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-056304 ssh -n ha-056304-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-056304 ssh -n ha-056304-m04 "sudo cat /home/docker/cp-test_ha-056304-m02_ha-056304-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-056304 cp testdata/cp-test.txt ha-056304-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-056304 ssh -n ha-056304-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-056304 cp ha-056304-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3823411209/001/cp-test_ha-056304-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-056304 ssh -n ha-056304-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-056304 cp ha-056304-m03:/home/docker/cp-test.txt ha-056304:/home/docker/cp-test_ha-056304-m03_ha-056304.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-056304 ssh -n ha-056304-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-056304 ssh -n ha-056304 "sudo cat /home/docker/cp-test_ha-056304-m03_ha-056304.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-056304 cp ha-056304-m03:/home/docker/cp-test.txt ha-056304-m02:/home/docker/cp-test_ha-056304-m03_ha-056304-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-056304 ssh -n ha-056304-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-056304 ssh -n ha-056304-m02 "sudo cat /home/docker/cp-test_ha-056304-m03_ha-056304-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-056304 cp ha-056304-m03:/home/docker/cp-test.txt ha-056304-m04:/home/docker/cp-test_ha-056304-m03_ha-056304-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-056304 ssh -n ha-056304-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-056304 ssh -n ha-056304-m04 "sudo cat /home/docker/cp-test_ha-056304-m03_ha-056304-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-056304 cp testdata/cp-test.txt ha-056304-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-056304 ssh -n ha-056304-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-056304 cp ha-056304-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3823411209/001/cp-test_ha-056304-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-056304 ssh -n ha-056304-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-056304 cp ha-056304-m04:/home/docker/cp-test.txt ha-056304:/home/docker/cp-test_ha-056304-m04_ha-056304.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-056304 ssh -n ha-056304-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-056304 ssh -n ha-056304 "sudo cat /home/docker/cp-test_ha-056304-m04_ha-056304.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-056304 cp ha-056304-m04:/home/docker/cp-test.txt ha-056304-m02:/home/docker/cp-test_ha-056304-m04_ha-056304-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-056304 ssh -n ha-056304-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-056304 ssh -n ha-056304-m02 "sudo cat /home/docker/cp-test_ha-056304-m04_ha-056304-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-056304 cp ha-056304-m04:/home/docker/cp-test.txt ha-056304-m03:/home/docker/cp-test_ha-056304-m04_ha-056304-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-056304 ssh -n ha-056304-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-056304 ssh -n ha-056304-m03 "sudo cat /home/docker/cp-test_ha-056304-m04_ha-056304-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (12.53s)
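The CopyFile sequence above follows a fixed naming pattern: each node's /home/docker/cp-test.txt is pushed to every other node as cp-test_&lt;src&gt;_&lt;dst&gt;.txt, then read back over ssh. A sketch that generates that ordered-pair matrix (node names taken from the log; the helper itself is hypothetical, not minikube code):

```python
from itertools import permutations

nodes = ["ha-056304", "ha-056304-m02", "ha-056304-m03", "ha-056304-m04"]

def cp_pairs(nodes):
    """Yield (src, dst, dst_path) for every ordered node pair, mirroring
    the cp-test_<src>_<dst>.txt convention the test uses."""
    for src, dst in permutations(nodes, 2):
        yield src, dst, f"/home/docker/cp-test_{src}_{dst}.txt"

pairs = list(cp_pairs(nodes))
print(len(pairs))   # 4 nodes -> 12 ordered pairs
```

Twelve node-to-node copies plus the local and /tmp round-trips account for the long run of `cp`/`ssh` invocations above.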

                                                
                                    
TestMultiControlPlane/serial/StopSecondaryNode (13.17s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-linux-amd64 -p ha-056304 node stop m02 -v=7 --alsologtostderr
E0906 18:53:34.445503   13284 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6054/.minikube/profiles/addons-009491/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:363: (dbg) Done: out/minikube-linux-amd64 -p ha-056304 node stop m02 -v=7 --alsologtostderr: (12.561602722s)
ha_test.go:369: (dbg) Run:  out/minikube-linux-amd64 -p ha-056304 status -v=7 --alsologtostderr
ha_test.go:369: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-056304 status -v=7 --alsologtostderr: exit status 7 (609.260275ms)

                                                
                                                
-- stdout --
	ha-056304
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-056304-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-056304-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-056304-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0906 18:53:41.586205   28437 out.go:345] Setting OutFile to fd 1 ...
	I0906 18:53:41.586465   28437 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0906 18:53:41.586475   28437 out.go:358] Setting ErrFile to fd 2...
	I0906 18:53:41.586479   28437 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0906 18:53:41.586715   28437 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19576-6054/.minikube/bin
	I0906 18:53:41.586908   28437 out.go:352] Setting JSON to false
	I0906 18:53:41.586931   28437 mustload.go:65] Loading cluster: ha-056304
	I0906 18:53:41.587045   28437 notify.go:220] Checking for updates...
	I0906 18:53:41.587404   28437 config.go:182] Loaded profile config "ha-056304": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0906 18:53:41.587421   28437 status.go:255] checking status of ha-056304 ...
	I0906 18:53:41.587912   28437 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0906 18:53:41.587966   28437 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 18:53:41.606725   28437 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41949
	I0906 18:53:41.607192   28437 main.go:141] libmachine: () Calling .GetVersion
	I0906 18:53:41.607824   28437 main.go:141] libmachine: Using API Version  1
	I0906 18:53:41.607852   28437 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 18:53:41.608189   28437 main.go:141] libmachine: () Calling .GetMachineName
	I0906 18:53:41.608402   28437 main.go:141] libmachine: (ha-056304) Calling .GetState
	I0906 18:53:41.610033   28437 status.go:330] ha-056304 host status = "Running" (err=<nil>)
	I0906 18:53:41.610049   28437 host.go:66] Checking if "ha-056304" exists ...
	I0906 18:53:41.610317   28437 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0906 18:53:41.610353   28437 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 18:53:41.625145   28437 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41397
	I0906 18:53:41.625504   28437 main.go:141] libmachine: () Calling .GetVersion
	I0906 18:53:41.625883   28437 main.go:141] libmachine: Using API Version  1
	I0906 18:53:41.625910   28437 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 18:53:41.626169   28437 main.go:141] libmachine: () Calling .GetMachineName
	I0906 18:53:41.626348   28437 main.go:141] libmachine: (ha-056304) Calling .GetIP
	I0906 18:53:41.628758   28437 main.go:141] libmachine: (ha-056304) DBG | domain ha-056304 has defined MAC address 52:54:00:56:d9:fc in network mk-ha-056304
	I0906 18:53:41.629159   28437 main.go:141] libmachine: (ha-056304) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:d9:fc", ip: ""} in network mk-ha-056304: {Iface:virbr1 ExpiryTime:2024-09-06 19:48:40 +0000 UTC Type:0 Mac:52:54:00:56:d9:fc Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:ha-056304 Clientid:01:52:54:00:56:d9:fc}
	I0906 18:53:41.629183   28437 main.go:141] libmachine: (ha-056304) DBG | domain ha-056304 has defined IP address 192.168.39.17 and MAC address 52:54:00:56:d9:fc in network mk-ha-056304
	I0906 18:53:41.629348   28437 host.go:66] Checking if "ha-056304" exists ...
	I0906 18:53:41.629772   28437 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0906 18:53:41.629813   28437 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 18:53:41.644969   28437 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45483
	I0906 18:53:41.645423   28437 main.go:141] libmachine: () Calling .GetVersion
	I0906 18:53:41.645846   28437 main.go:141] libmachine: Using API Version  1
	I0906 18:53:41.645869   28437 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 18:53:41.646160   28437 main.go:141] libmachine: () Calling .GetMachineName
	I0906 18:53:41.646326   28437 main.go:141] libmachine: (ha-056304) Calling .DriverName
	I0906 18:53:41.646485   28437 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0906 18:53:41.646517   28437 main.go:141] libmachine: (ha-056304) Calling .GetSSHHostname
	I0906 18:53:41.648997   28437 main.go:141] libmachine: (ha-056304) DBG | domain ha-056304 has defined MAC address 52:54:00:56:d9:fc in network mk-ha-056304
	I0906 18:53:41.649406   28437 main.go:141] libmachine: (ha-056304) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:d9:fc", ip: ""} in network mk-ha-056304: {Iface:virbr1 ExpiryTime:2024-09-06 19:48:40 +0000 UTC Type:0 Mac:52:54:00:56:d9:fc Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:ha-056304 Clientid:01:52:54:00:56:d9:fc}
	I0906 18:53:41.649441   28437 main.go:141] libmachine: (ha-056304) DBG | domain ha-056304 has defined IP address 192.168.39.17 and MAC address 52:54:00:56:d9:fc in network mk-ha-056304
	I0906 18:53:41.649567   28437 main.go:141] libmachine: (ha-056304) Calling .GetSSHPort
	I0906 18:53:41.649710   28437 main.go:141] libmachine: (ha-056304) Calling .GetSSHKeyPath
	I0906 18:53:41.649828   28437 main.go:141] libmachine: (ha-056304) Calling .GetSSHUsername
	I0906 18:53:41.649966   28437 sshutil.go:53] new ssh client: &{IP:192.168.39.17 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19576-6054/.minikube/machines/ha-056304/id_rsa Username:docker}
	I0906 18:53:41.739170   28437 ssh_runner.go:195] Run: systemctl --version
	I0906 18:53:41.745245   28437 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0906 18:53:41.761608   28437 kubeconfig.go:125] found "ha-056304" server: "https://192.168.39.254:8443"
	I0906 18:53:41.761638   28437 api_server.go:166] Checking apiserver status ...
	I0906 18:53:41.761689   28437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 18:53:41.775539   28437 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1960/cgroup
	W0906 18:53:41.784817   28437 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1960/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0906 18:53:41.784858   28437 ssh_runner.go:195] Run: ls
	I0906 18:53:41.789335   28437 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0906 18:53:41.793586   28437 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0906 18:53:41.793603   28437 status.go:422] ha-056304 apiserver status = Running (err=<nil>)
	I0906 18:53:41.793612   28437 status.go:257] ha-056304 status: &{Name:ha-056304 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0906 18:53:41.793638   28437 status.go:255] checking status of ha-056304-m02 ...
	I0906 18:53:41.793987   28437 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0906 18:53:41.794025   28437 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 18:53:41.809015   28437 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45145
	I0906 18:53:41.809370   28437 main.go:141] libmachine: () Calling .GetVersion
	I0906 18:53:41.809837   28437 main.go:141] libmachine: Using API Version  1
	I0906 18:53:41.809853   28437 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 18:53:41.810129   28437 main.go:141] libmachine: () Calling .GetMachineName
	I0906 18:53:41.810295   28437 main.go:141] libmachine: (ha-056304-m02) Calling .GetState
	I0906 18:53:41.811853   28437 status.go:330] ha-056304-m02 host status = "Stopped" (err=<nil>)
	I0906 18:53:41.811867   28437 status.go:343] host is not running, skipping remaining checks
	I0906 18:53:41.811875   28437 status.go:257] ha-056304-m02 status: &{Name:ha-056304-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0906 18:53:41.811920   28437 status.go:255] checking status of ha-056304-m03 ...
	I0906 18:53:41.812199   28437 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0906 18:53:41.812240   28437 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 18:53:41.828860   28437 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36619
	I0906 18:53:41.829305   28437 main.go:141] libmachine: () Calling .GetVersion
	I0906 18:53:41.829753   28437 main.go:141] libmachine: Using API Version  1
	I0906 18:53:41.829768   28437 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 18:53:41.830097   28437 main.go:141] libmachine: () Calling .GetMachineName
	I0906 18:53:41.830256   28437 main.go:141] libmachine: (ha-056304-m03) Calling .GetState
	I0906 18:53:41.831942   28437 status.go:330] ha-056304-m03 host status = "Running" (err=<nil>)
	I0906 18:53:41.831958   28437 host.go:66] Checking if "ha-056304-m03" exists ...
	I0906 18:53:41.832348   28437 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0906 18:53:41.832389   28437 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 18:53:41.846249   28437 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34407
	I0906 18:53:41.846591   28437 main.go:141] libmachine: () Calling .GetVersion
	I0906 18:53:41.847108   28437 main.go:141] libmachine: Using API Version  1
	I0906 18:53:41.847135   28437 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 18:53:41.847416   28437 main.go:141] libmachine: () Calling .GetMachineName
	I0906 18:53:41.847631   28437 main.go:141] libmachine: (ha-056304-m03) Calling .GetIP
	I0906 18:53:41.850253   28437 main.go:141] libmachine: (ha-056304-m03) DBG | domain ha-056304-m03 has defined MAC address 52:54:00:84:df:7b in network mk-ha-056304
	I0906 18:53:41.850640   28437 main.go:141] libmachine: (ha-056304-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:84:df:7b", ip: ""} in network mk-ha-056304: {Iface:virbr1 ExpiryTime:2024-09-06 19:50:55 +0000 UTC Type:0 Mac:52:54:00:84:df:7b Iaid: IPaddr:192.168.39.236 Prefix:24 Hostname:ha-056304-m03 Clientid:01:52:54:00:84:df:7b}
	I0906 18:53:41.850667   28437 main.go:141] libmachine: (ha-056304-m03) DBG | domain ha-056304-m03 has defined IP address 192.168.39.236 and MAC address 52:54:00:84:df:7b in network mk-ha-056304
	I0906 18:53:41.850817   28437 host.go:66] Checking if "ha-056304-m03" exists ...
	I0906 18:53:41.851135   28437 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0906 18:53:41.851174   28437 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 18:53:41.864949   28437 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42757
	I0906 18:53:41.865316   28437 main.go:141] libmachine: () Calling .GetVersion
	I0906 18:53:41.865794   28437 main.go:141] libmachine: Using API Version  1
	I0906 18:53:41.865818   28437 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 18:53:41.866112   28437 main.go:141] libmachine: () Calling .GetMachineName
	I0906 18:53:41.866310   28437 main.go:141] libmachine: (ha-056304-m03) Calling .DriverName
	I0906 18:53:41.866441   28437 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0906 18:53:41.866460   28437 main.go:141] libmachine: (ha-056304-m03) Calling .GetSSHHostname
	I0906 18:53:41.869101   28437 main.go:141] libmachine: (ha-056304-m03) DBG | domain ha-056304-m03 has defined MAC address 52:54:00:84:df:7b in network mk-ha-056304
	I0906 18:53:41.869493   28437 main.go:141] libmachine: (ha-056304-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:84:df:7b", ip: ""} in network mk-ha-056304: {Iface:virbr1 ExpiryTime:2024-09-06 19:50:55 +0000 UTC Type:0 Mac:52:54:00:84:df:7b Iaid: IPaddr:192.168.39.236 Prefix:24 Hostname:ha-056304-m03 Clientid:01:52:54:00:84:df:7b}
	I0906 18:53:41.869521   28437 main.go:141] libmachine: (ha-056304-m03) DBG | domain ha-056304-m03 has defined IP address 192.168.39.236 and MAC address 52:54:00:84:df:7b in network mk-ha-056304
	I0906 18:53:41.869626   28437 main.go:141] libmachine: (ha-056304-m03) Calling .GetSSHPort
	I0906 18:53:41.869769   28437 main.go:141] libmachine: (ha-056304-m03) Calling .GetSSHKeyPath
	I0906 18:53:41.869893   28437 main.go:141] libmachine: (ha-056304-m03) Calling .GetSSHUsername
	I0906 18:53:41.870009   28437 sshutil.go:53] new ssh client: &{IP:192.168.39.236 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19576-6054/.minikube/machines/ha-056304-m03/id_rsa Username:docker}
	I0906 18:53:41.951336   28437 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0906 18:53:41.965837   28437 kubeconfig.go:125] found "ha-056304" server: "https://192.168.39.254:8443"
	I0906 18:53:41.965860   28437 api_server.go:166] Checking apiserver status ...
	I0906 18:53:41.965893   28437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 18:53:41.978854   28437 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1831/cgroup
	W0906 18:53:41.989020   28437 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1831/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0906 18:53:41.989065   28437 ssh_runner.go:195] Run: ls
	I0906 18:53:41.993536   28437 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0906 18:53:41.997466   28437 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0906 18:53:41.997486   28437 status.go:422] ha-056304-m03 apiserver status = Running (err=<nil>)
	I0906 18:53:41.997494   28437 status.go:257] ha-056304-m03 status: &{Name:ha-056304-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0906 18:53:41.997519   28437 status.go:255] checking status of ha-056304-m04 ...
	I0906 18:53:41.997833   28437 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0906 18:53:41.997871   28437 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 18:53:42.012540   28437 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43279
	I0906 18:53:42.012933   28437 main.go:141] libmachine: () Calling .GetVersion
	I0906 18:53:42.013435   28437 main.go:141] libmachine: Using API Version  1
	I0906 18:53:42.013454   28437 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 18:53:42.013728   28437 main.go:141] libmachine: () Calling .GetMachineName
	I0906 18:53:42.013943   28437 main.go:141] libmachine: (ha-056304-m04) Calling .GetState
	I0906 18:53:42.015415   28437 status.go:330] ha-056304-m04 host status = "Running" (err=<nil>)
	I0906 18:53:42.015448   28437 host.go:66] Checking if "ha-056304-m04" exists ...
	I0906 18:53:42.015785   28437 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0906 18:53:42.015819   28437 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 18:53:42.030876   28437 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34663
	I0906 18:53:42.031320   28437 main.go:141] libmachine: () Calling .GetVersion
	I0906 18:53:42.031830   28437 main.go:141] libmachine: Using API Version  1
	I0906 18:53:42.031852   28437 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 18:53:42.032192   28437 main.go:141] libmachine: () Calling .GetMachineName
	I0906 18:53:42.032353   28437 main.go:141] libmachine: (ha-056304-m04) Calling .GetIP
	I0906 18:53:42.035188   28437 main.go:141] libmachine: (ha-056304-m04) DBG | domain ha-056304-m04 has defined MAC address 52:54:00:36:fc:40 in network mk-ha-056304
	I0906 18:53:42.035629   28437 main.go:141] libmachine: (ha-056304-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:fc:40", ip: ""} in network mk-ha-056304: {Iface:virbr1 ExpiryTime:2024-09-06 19:52:28 +0000 UTC Type:0 Mac:52:54:00:36:fc:40 Iaid: IPaddr:192.168.39.74 Prefix:24 Hostname:ha-056304-m04 Clientid:01:52:54:00:36:fc:40}
	I0906 18:53:42.035657   28437 main.go:141] libmachine: (ha-056304-m04) DBG | domain ha-056304-m04 has defined IP address 192.168.39.74 and MAC address 52:54:00:36:fc:40 in network mk-ha-056304
	I0906 18:53:42.035722   28437 host.go:66] Checking if "ha-056304-m04" exists ...
	I0906 18:53:42.036125   28437 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0906 18:53:42.036170   28437 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 18:53:42.051208   28437 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46643
	I0906 18:53:42.051671   28437 main.go:141] libmachine: () Calling .GetVersion
	I0906 18:53:42.052164   28437 main.go:141] libmachine: Using API Version  1
	I0906 18:53:42.052198   28437 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 18:53:42.052536   28437 main.go:141] libmachine: () Calling .GetMachineName
	I0906 18:53:42.052688   28437 main.go:141] libmachine: (ha-056304-m04) Calling .DriverName
	I0906 18:53:42.052847   28437 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0906 18:53:42.052871   28437 main.go:141] libmachine: (ha-056304-m04) Calling .GetSSHHostname
	I0906 18:53:42.055560   28437 main.go:141] libmachine: (ha-056304-m04) DBG | domain ha-056304-m04 has defined MAC address 52:54:00:36:fc:40 in network mk-ha-056304
	I0906 18:53:42.055961   28437 main.go:141] libmachine: (ha-056304-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:fc:40", ip: ""} in network mk-ha-056304: {Iface:virbr1 ExpiryTime:2024-09-06 19:52:28 +0000 UTC Type:0 Mac:52:54:00:36:fc:40 Iaid: IPaddr:192.168.39.74 Prefix:24 Hostname:ha-056304-m04 Clientid:01:52:54:00:36:fc:40}
	I0906 18:53:42.055994   28437 main.go:141] libmachine: (ha-056304-m04) DBG | domain ha-056304-m04 has defined IP address 192.168.39.74 and MAC address 52:54:00:36:fc:40 in network mk-ha-056304
	I0906 18:53:42.056141   28437 main.go:141] libmachine: (ha-056304-m04) Calling .GetSSHPort
	I0906 18:53:42.056309   28437 main.go:141] libmachine: (ha-056304-m04) Calling .GetSSHKeyPath
	I0906 18:53:42.056458   28437 main.go:141] libmachine: (ha-056304-m04) Calling .GetSSHUsername
	I0906 18:53:42.056579   28437 sshutil.go:53] new ssh client: &{IP:192.168.39.74 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19576-6054/.minikube/machines/ha-056304-m04/id_rsa Username:docker}
	I0906 18:53:42.138497   28437 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0906 18:53:42.154166   28437 status.go:257] ha-056304-m04 status: &{Name:ha-056304-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (13.17s)

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.38s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.38s)

TestMultiControlPlane/serial/RestartSecondaryNode (158.76s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-linux-amd64 -p ha-056304 node start m02 -v=7 --alsologtostderr
E0906 18:53:49.685175   13284 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6054/.minikube/profiles/functional-745007/client.crt: no such file or directory" logger="UnhandledError"
E0906 18:55:11.609373   13284 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6054/.minikube/profiles/functional-745007/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:420: (dbg) Done: out/minikube-linux-amd64 -p ha-056304 node start m02 -v=7 --alsologtostderr: (2m37.89613453s)
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-056304 status -v=7 --alsologtostderr
ha_test.go:448: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (158.76s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.51s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.51s)

TestMultiControlPlane/serial/RestartClusterKeepsNodes (230.29s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-056304 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-linux-amd64 stop -p ha-056304 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Done: out/minikube-linux-amd64 stop -p ha-056304 -v=7 --alsologtostderr: (42.356487281s)
ha_test.go:467: (dbg) Run:  out/minikube-linux-amd64 start -p ha-056304 --wait=true -v=7 --alsologtostderr
E0906 18:57:27.747485   13284 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6054/.minikube/profiles/functional-745007/client.crt: no such file or directory" logger="UnhandledError"
E0906 18:57:55.451698   13284 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6054/.minikube/profiles/functional-745007/client.crt: no such file or directory" logger="UnhandledError"
E0906 18:58:06.741728   13284 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6054/.minikube/profiles/addons-009491/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:467: (dbg) Done: out/minikube-linux-amd64 start -p ha-056304 --wait=true -v=7 --alsologtostderr: (3m7.845673209s)
ha_test.go:472: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-056304
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (230.29s)

TestMultiControlPlane/serial/DeleteSecondaryNode (7.02s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-linux-amd64 -p ha-056304 node delete m03 -v=7 --alsologtostderr
ha_test.go:487: (dbg) Done: out/minikube-linux-amd64 -p ha-056304 node delete m03 -v=7 --alsologtostderr: (6.293827883s)
ha_test.go:493: (dbg) Run:  out/minikube-linux-amd64 -p ha-056304 status -v=7 --alsologtostderr
ha_test.go:511: (dbg) Run:  kubectl get nodes
ha_test.go:519: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (7.02s)

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.36s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.36s)

TestMultiControlPlane/serial/StopCluster (39.09s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-linux-amd64 -p ha-056304 stop -v=7 --alsologtostderr
ha_test.go:531: (dbg) Done: out/minikube-linux-amd64 -p ha-056304 stop -v=7 --alsologtostderr: (38.993445468s)
ha_test.go:537: (dbg) Run:  out/minikube-linux-amd64 -p ha-056304 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-056304 status -v=7 --alsologtostderr: exit status 7 (97.881317ms)

-- stdout --
	ha-056304
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-056304-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-056304-m04
	type: Worker
	host: Stopped
	kubelet: Stopped

-- /stdout --
** stderr ** 
	I0906 19:00:58.509557   31210 out.go:345] Setting OutFile to fd 1 ...
	I0906 19:00:58.509818   31210 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0906 19:00:58.509829   31210 out.go:358] Setting ErrFile to fd 2...
	I0906 19:00:58.509833   31210 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0906 19:00:58.510032   31210 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19576-6054/.minikube/bin
	I0906 19:00:58.510250   31210 out.go:352] Setting JSON to false
	I0906 19:00:58.510279   31210 mustload.go:65] Loading cluster: ha-056304
	I0906 19:00:58.510378   31210 notify.go:220] Checking for updates...
	I0906 19:00:58.510722   31210 config.go:182] Loaded profile config "ha-056304": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0906 19:00:58.510739   31210 status.go:255] checking status of ha-056304 ...
	I0906 19:00:58.511185   31210 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0906 19:00:58.511329   31210 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 19:00:58.529688   31210 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33071
	I0906 19:00:58.530053   31210 main.go:141] libmachine: () Calling .GetVersion
	I0906 19:00:58.530570   31210 main.go:141] libmachine: Using API Version  1
	I0906 19:00:58.530615   31210 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 19:00:58.530905   31210 main.go:141] libmachine: () Calling .GetMachineName
	I0906 19:00:58.531102   31210 main.go:141] libmachine: (ha-056304) Calling .GetState
	I0906 19:00:58.532497   31210 status.go:330] ha-056304 host status = "Stopped" (err=<nil>)
	I0906 19:00:58.532511   31210 status.go:343] host is not running, skipping remaining checks
	I0906 19:00:58.532519   31210 status.go:257] ha-056304 status: &{Name:ha-056304 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0906 19:00:58.532542   31210 status.go:255] checking status of ha-056304-m02 ...
	I0906 19:00:58.532848   31210 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0906 19:00:58.532881   31210 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 19:00:58.547658   31210 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40441
	I0906 19:00:58.548012   31210 main.go:141] libmachine: () Calling .GetVersion
	I0906 19:00:58.548541   31210 main.go:141] libmachine: Using API Version  1
	I0906 19:00:58.548562   31210 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 19:00:58.548869   31210 main.go:141] libmachine: () Calling .GetMachineName
	I0906 19:00:58.549031   31210 main.go:141] libmachine: (ha-056304-m02) Calling .GetState
	I0906 19:00:58.550314   31210 status.go:330] ha-056304-m02 host status = "Stopped" (err=<nil>)
	I0906 19:00:58.550324   31210 status.go:343] host is not running, skipping remaining checks
	I0906 19:00:58.550329   31210 status.go:257] ha-056304-m02 status: &{Name:ha-056304-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0906 19:00:58.550345   31210 status.go:255] checking status of ha-056304-m04 ...
	I0906 19:00:58.550625   31210 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0906 19:00:58.550675   31210 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 19:00:58.564160   31210 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37035
	I0906 19:00:58.564477   31210 main.go:141] libmachine: () Calling .GetVersion
	I0906 19:00:58.564821   31210 main.go:141] libmachine: Using API Version  1
	I0906 19:00:58.564837   31210 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 19:00:58.565139   31210 main.go:141] libmachine: () Calling .GetMachineName
	I0906 19:00:58.565286   31210 main.go:141] libmachine: (ha-056304-m04) Calling .GetState
	I0906 19:00:58.566670   31210 status.go:330] ha-056304-m04 host status = "Stopped" (err=<nil>)
	I0906 19:00:58.566684   31210 status.go:343] host is not running, skipping remaining checks
	I0906 19:00:58.566693   31210 status.go:257] ha-056304-m04 status: &{Name:ha-056304-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (39.09s)

TestMultiControlPlane/serial/RestartCluster (127.8s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-linux-amd64 start -p ha-056304 --wait=true -v=7 --alsologtostderr --driver=kvm2 
E0906 19:02:27.747635   13284 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6054/.minikube/profiles/functional-745007/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:560: (dbg) Done: out/minikube-linux-amd64 start -p ha-056304 --wait=true -v=7 --alsologtostderr --driver=kvm2 : (2m7.064884059s)
ha_test.go:566: (dbg) Run:  out/minikube-linux-amd64 -p ha-056304 status -v=7 --alsologtostderr
ha_test.go:584: (dbg) Run:  kubectl get nodes
ha_test.go:592: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (127.80s)

TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.36s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.36s)

TestMultiControlPlane/serial/AddSecondaryNode (93.42s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-056304 --control-plane -v=7 --alsologtostderr
E0906 19:03:06.742044   13284 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6054/.minikube/profiles/addons-009491/client.crt: no such file or directory" logger="UnhandledError"
E0906 19:04:29.807441   13284 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6054/.minikube/profiles/addons-009491/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:605: (dbg) Done: out/minikube-linux-amd64 node add -p ha-056304 --control-plane -v=7 --alsologtostderr: (1m32.599360501s)
ha_test.go:611: (dbg) Run:  out/minikube-linux-amd64 -p ha-056304 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (93.42s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.51s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.51s)

TestImageBuild/serial/Setup (51.84s)

=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-linux-amd64 start -p image-022392 --driver=kvm2 
image_test.go:69: (dbg) Done: out/minikube-linux-amd64 start -p image-022392 --driver=kvm2 : (51.839349171s)
--- PASS: TestImageBuild/serial/Setup (51.84s)

TestImageBuild/serial/NormalBuild (2.11s)

=== RUN   TestImageBuild/serial/NormalBuild
image_test.go:78: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-022392
image_test.go:78: (dbg) Done: out/minikube-linux-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-022392: (2.108307489s)
--- PASS: TestImageBuild/serial/NormalBuild (2.11s)

TestImageBuild/serial/BuildWithBuildArg (1.18s)

=== RUN   TestImageBuild/serial/BuildWithBuildArg
image_test.go:99: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-022392
image_test.go:99: (dbg) Done: out/minikube-linux-amd64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-022392: (1.184234553s)
--- PASS: TestImageBuild/serial/BuildWithBuildArg (1.18s)

TestImageBuild/serial/BuildWithDockerIgnore (1.07s)

=== RUN   TestImageBuild/serial/BuildWithDockerIgnore
image_test.go:133: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-022392
image_test.go:133: (dbg) Done: out/minikube-linux-amd64 image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-022392: (1.070487676s)
--- PASS: TestImageBuild/serial/BuildWithDockerIgnore (1.07s)

TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.8s)

=== RUN   TestImageBuild/serial/BuildWithSpecifiedDockerfile
image_test.go:88: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-022392
--- PASS: TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.80s)

TestJSONOutput/start/Command (59.88s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-704869 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2 
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-704869 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2 : (59.878626958s)
--- PASS: TestJSONOutput/start/Command (59.88s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.57s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-704869 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.57s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.52s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-704869 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.52s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (13.35s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-704869 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-704869 --output=json --user=testUser: (13.353770341s)
--- PASS: TestJSONOutput/stop/Command (13.35s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.17s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-078472 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-078472 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (56.333187ms)
-- stdout --
	{"specversion":"1.0","id":"0746a00e-6631-4992-95f9-312570f0b2bb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-078472] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"67f00630-9ffc-476f-84ae-ff274d7c8c6f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19576"}}
	{"specversion":"1.0","id":"a6812785-9205-4816-a17b-7cedadc591a0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"438b1691-d9fc-4c9a-8c97-57b1fb196456","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19576-6054/kubeconfig"}}
	{"specversion":"1.0","id":"0fbf492b-0e64-4ec5-8dff-7b0e0bb2943b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19576-6054/.minikube"}}
	{"specversion":"1.0","id":"c6bfa50a-bcd8-44ce-8895-c48ce1862f8e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"6d7cc373-7142-46a2-a569-08a8040492e0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"87b0966b-ba27-49d7-a422-40013162288b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-078472" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-078472
--- PASS: TestErrorJSONOutput (0.17s)

TestMainNoArgs (0.04s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.04s)

TestMinikubeProfile (103.69s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-575119 --driver=kvm2 
E0906 19:07:27.747290   13284 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6054/.minikube/profiles/functional-745007/client.crt: no such file or directory" logger="UnhandledError"
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-575119 --driver=kvm2 : (49.722753392s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-577661 --driver=kvm2 
E0906 19:08:06.743065   13284 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6054/.minikube/profiles/addons-009491/client.crt: no such file or directory" logger="UnhandledError"
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-577661 --driver=kvm2 : (51.415248869s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-575119
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-577661
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-577661" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-577661
helpers_test.go:175: Cleaning up "first-575119" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-575119
--- PASS: TestMinikubeProfile (103.69s)

TestMountStart/serial/StartWithMountFirst (32.53s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-001787 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2 
E0906 19:08:50.815559   13284 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6054/.minikube/profiles/functional-745007/client.crt: no such file or directory" logger="UnhandledError"
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-001787 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2 : (31.533132832s)
--- PASS: TestMountStart/serial/StartWithMountFirst (32.53s)

TestMountStart/serial/VerifyMountFirst (0.36s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-001787 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-001787 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountFirst (0.36s)

TestMountStart/serial/StartWithMountSecond (33.66s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-019513 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2 
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-019513 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2 : (32.664339936s)
--- PASS: TestMountStart/serial/StartWithMountSecond (33.66s)

TestMountStart/serial/VerifyMountSecond (0.39s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-019513 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-019513 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountSecond (0.39s)

TestMountStart/serial/DeleteFirst (0.66s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-001787 --alsologtostderr -v=5
--- PASS: TestMountStart/serial/DeleteFirst (0.66s)

TestMountStart/serial/VerifyMountPostDelete (0.37s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-019513 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-019513 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.37s)

TestMountStart/serial/Stop (2.27s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-019513
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-019513: (2.2714267s)
--- PASS: TestMountStart/serial/Stop (2.27s)

TestMountStart/serial/RestartStopped (26.86s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-019513
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-019513: (25.854871983s)
--- PASS: TestMountStart/serial/RestartStopped (26.86s)

TestMountStart/serial/VerifyMountPostStop (0.37s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-019513 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-019513 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.37s)

TestMultiNode/serial/FreshStart2Nodes (128.37s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-556652 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2 
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-556652 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2 : (2m7.981515719s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-556652 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (128.37s)

TestMultiNode/serial/DeployApp2Nodes (5.07s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-556652 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-556652 -- rollout status deployment/busybox
E0906 19:12:27.747397   13284 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6054/.minikube/profiles/functional-745007/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-556652 -- rollout status deployment/busybox: (3.644607928s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-556652 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-556652 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-556652 -- exec busybox-7dff88458-mrm8z -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-556652 -- exec busybox-7dff88458-mt7j6 -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-556652 -- exec busybox-7dff88458-mrm8z -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-556652 -- exec busybox-7dff88458-mt7j6 -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-556652 -- exec busybox-7dff88458-mrm8z -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-556652 -- exec busybox-7dff88458-mt7j6 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (5.07s)

TestMultiNode/serial/PingHostFrom2Pods (0.77s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-556652 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-556652 -- exec busybox-7dff88458-mrm8z -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-556652 -- exec busybox-7dff88458-mrm8z -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-556652 -- exec busybox-7dff88458-mt7j6 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-556652 -- exec busybox-7dff88458-mt7j6 -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.77s)

TestMultiNode/serial/AddNode (58.74s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-556652 -v 3 --alsologtostderr
E0906 19:13:06.741529   13284 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6054/.minikube/profiles/addons-009491/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-556652 -v 3 --alsologtostderr: (58.199677772s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-556652 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (58.74s)

TestMultiNode/serial/MultiNodeLabels (0.06s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-556652 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

TestMultiNode/serial/ProfileList (0.2s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.20s)

TestMultiNode/serial/CopyFile (6.85s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-556652 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-556652 cp testdata/cp-test.txt multinode-556652:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-556652 ssh -n multinode-556652 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-556652 cp multinode-556652:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3630363695/001/cp-test_multinode-556652.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-556652 ssh -n multinode-556652 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-556652 cp multinode-556652:/home/docker/cp-test.txt multinode-556652-m02:/home/docker/cp-test_multinode-556652_multinode-556652-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-556652 ssh -n multinode-556652 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-556652 ssh -n multinode-556652-m02 "sudo cat /home/docker/cp-test_multinode-556652_multinode-556652-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-556652 cp multinode-556652:/home/docker/cp-test.txt multinode-556652-m03:/home/docker/cp-test_multinode-556652_multinode-556652-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-556652 ssh -n multinode-556652 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-556652 ssh -n multinode-556652-m03 "sudo cat /home/docker/cp-test_multinode-556652_multinode-556652-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-556652 cp testdata/cp-test.txt multinode-556652-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-556652 ssh -n multinode-556652-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-556652 cp multinode-556652-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3630363695/001/cp-test_multinode-556652-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-556652 ssh -n multinode-556652-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-556652 cp multinode-556652-m02:/home/docker/cp-test.txt multinode-556652:/home/docker/cp-test_multinode-556652-m02_multinode-556652.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-556652 ssh -n multinode-556652-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-556652 ssh -n multinode-556652 "sudo cat /home/docker/cp-test_multinode-556652-m02_multinode-556652.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-556652 cp multinode-556652-m02:/home/docker/cp-test.txt multinode-556652-m03:/home/docker/cp-test_multinode-556652-m02_multinode-556652-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-556652 ssh -n multinode-556652-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-556652 ssh -n multinode-556652-m03 "sudo cat /home/docker/cp-test_multinode-556652-m02_multinode-556652-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-556652 cp testdata/cp-test.txt multinode-556652-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-556652 ssh -n multinode-556652-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-556652 cp multinode-556652-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3630363695/001/cp-test_multinode-556652-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-556652 ssh -n multinode-556652-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-556652 cp multinode-556652-m03:/home/docker/cp-test.txt multinode-556652:/home/docker/cp-test_multinode-556652-m03_multinode-556652.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-556652 ssh -n multinode-556652-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-556652 ssh -n multinode-556652 "sudo cat /home/docker/cp-test_multinode-556652-m03_multinode-556652.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-556652 cp multinode-556652-m03:/home/docker/cp-test.txt multinode-556652-m02:/home/docker/cp-test_multinode-556652-m03_multinode-556652-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-556652 ssh -n multinode-556652-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-556652 ssh -n multinode-556652-m02 "sudo cat /home/docker/cp-test_multinode-556652-m03_multinode-556652-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (6.85s)

TestMultiNode/serial/StopNode (3.36s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-556652 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-556652 node stop m03: (2.555136503s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-556652 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-556652 status: exit status 7 (402.710193ms)
-- stdout --
	multinode-556652
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-556652-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-556652-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-556652 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-556652 status --alsologtostderr: exit status 7 (400.894408ms)
-- stdout --
	multinode-556652
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-556652-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-556652-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I0906 19:13:41.930895   39582 out.go:345] Setting OutFile to fd 1 ...
	I0906 19:13:41.931172   39582 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0906 19:13:41.931182   39582 out.go:358] Setting ErrFile to fd 2...
	I0906 19:13:41.931186   39582 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0906 19:13:41.931381   39582 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19576-6054/.minikube/bin
	I0906 19:13:41.931524   39582 out.go:352] Setting JSON to false
	I0906 19:13:41.931544   39582 mustload.go:65] Loading cluster: multinode-556652
	I0906 19:13:41.931664   39582 notify.go:220] Checking for updates...
	I0906 19:13:41.931891   39582 config.go:182] Loaded profile config "multinode-556652": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0906 19:13:41.931903   39582 status.go:255] checking status of multinode-556652 ...
	I0906 19:13:41.932246   39582 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0906 19:13:41.932312   39582 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 19:13:41.953476   39582 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43461
	I0906 19:13:41.953995   39582 main.go:141] libmachine: () Calling .GetVersion
	I0906 19:13:41.954609   39582 main.go:141] libmachine: Using API Version  1
	I0906 19:13:41.954631   39582 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 19:13:41.954966   39582 main.go:141] libmachine: () Calling .GetMachineName
	I0906 19:13:41.955135   39582 main.go:141] libmachine: (multinode-556652) Calling .GetState
	I0906 19:13:41.956722   39582 status.go:330] multinode-556652 host status = "Running" (err=<nil>)
	I0906 19:13:41.956737   39582 host.go:66] Checking if "multinode-556652" exists ...
	I0906 19:13:41.957009   39582 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0906 19:13:41.957039   39582 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 19:13:41.971766   39582 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34791
	I0906 19:13:41.972173   39582 main.go:141] libmachine: () Calling .GetVersion
	I0906 19:13:41.972613   39582 main.go:141] libmachine: Using API Version  1
	I0906 19:13:41.972630   39582 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 19:13:41.972920   39582 main.go:141] libmachine: () Calling .GetMachineName
	I0906 19:13:41.973095   39582 main.go:141] libmachine: (multinode-556652) Calling .GetIP
	I0906 19:13:41.975526   39582 main.go:141] libmachine: (multinode-556652) DBG | domain multinode-556652 has defined MAC address 52:54:00:a0:fe:72 in network mk-multinode-556652
	I0906 19:13:41.975906   39582 main.go:141] libmachine: (multinode-556652) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a0:fe:72", ip: ""} in network mk-multinode-556652: {Iface:virbr1 ExpiryTime:2024-09-06 20:10:33 +0000 UTC Type:0 Mac:52:54:00:a0:fe:72 Iaid: IPaddr:192.168.39.156 Prefix:24 Hostname:multinode-556652 Clientid:01:52:54:00:a0:fe:72}
	I0906 19:13:41.975926   39582 main.go:141] libmachine: (multinode-556652) DBG | domain multinode-556652 has defined IP address 192.168.39.156 and MAC address 52:54:00:a0:fe:72 in network mk-multinode-556652
	I0906 19:13:41.976054   39582 host.go:66] Checking if "multinode-556652" exists ...
	I0906 19:13:41.976310   39582 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0906 19:13:41.976350   39582 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 19:13:41.990827   39582 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41327
	I0906 19:13:41.991192   39582 main.go:141] libmachine: () Calling .GetVersion
	I0906 19:13:41.991628   39582 main.go:141] libmachine: Using API Version  1
	I0906 19:13:41.991659   39582 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 19:13:41.991962   39582 main.go:141] libmachine: () Calling .GetMachineName
	I0906 19:13:41.992143   39582 main.go:141] libmachine: (multinode-556652) Calling .DriverName
	I0906 19:13:41.992324   39582 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0906 19:13:41.992340   39582 main.go:141] libmachine: (multinode-556652) Calling .GetSSHHostname
	I0906 19:13:41.994907   39582 main.go:141] libmachine: (multinode-556652) DBG | domain multinode-556652 has defined MAC address 52:54:00:a0:fe:72 in network mk-multinode-556652
	I0906 19:13:41.995330   39582 main.go:141] libmachine: (multinode-556652) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a0:fe:72", ip: ""} in network mk-multinode-556652: {Iface:virbr1 ExpiryTime:2024-09-06 20:10:33 +0000 UTC Type:0 Mac:52:54:00:a0:fe:72 Iaid: IPaddr:192.168.39.156 Prefix:24 Hostname:multinode-556652 Clientid:01:52:54:00:a0:fe:72}
	I0906 19:13:41.995357   39582 main.go:141] libmachine: (multinode-556652) DBG | domain multinode-556652 has defined IP address 192.168.39.156 and MAC address 52:54:00:a0:fe:72 in network mk-multinode-556652
	I0906 19:13:41.995498   39582 main.go:141] libmachine: (multinode-556652) Calling .GetSSHPort
	I0906 19:13:41.995656   39582 main.go:141] libmachine: (multinode-556652) Calling .GetSSHKeyPath
	I0906 19:13:41.995805   39582 main.go:141] libmachine: (multinode-556652) Calling .GetSSHUsername
	I0906 19:13:41.995954   39582 sshutil.go:53] new ssh client: &{IP:192.168.39.156 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19576-6054/.minikube/machines/multinode-556652/id_rsa Username:docker}
	I0906 19:13:42.074592   39582 ssh_runner.go:195] Run: systemctl --version
	I0906 19:13:42.080619   39582 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0906 19:13:42.094428   39582 kubeconfig.go:125] found "multinode-556652" server: "https://192.168.39.156:8443"
	I0906 19:13:42.094458   39582 api_server.go:166] Checking apiserver status ...
	I0906 19:13:42.094502   39582 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 19:13:42.107386   39582 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1802/cgroup
	W0906 19:13:42.116191   39582 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1802/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0906 19:13:42.116227   39582 ssh_runner.go:195] Run: ls
	I0906 19:13:42.120719   39582 api_server.go:253] Checking apiserver healthz at https://192.168.39.156:8443/healthz ...
	I0906 19:13:42.125345   39582 api_server.go:279] https://192.168.39.156:8443/healthz returned 200:
	ok
	I0906 19:13:42.125361   39582 status.go:422] multinode-556652 apiserver status = Running (err=<nil>)
	I0906 19:13:42.125370   39582 status.go:257] multinode-556652 status: &{Name:multinode-556652 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0906 19:13:42.125393   39582 status.go:255] checking status of multinode-556652-m02 ...
	I0906 19:13:42.125661   39582 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0906 19:13:42.125690   39582 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 19:13:42.141482   39582 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39451
	I0906 19:13:42.141895   39582 main.go:141] libmachine: () Calling .GetVersion
	I0906 19:13:42.142284   39582 main.go:141] libmachine: Using API Version  1
	I0906 19:13:42.142308   39582 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 19:13:42.142624   39582 main.go:141] libmachine: () Calling .GetMachineName
	I0906 19:13:42.142818   39582 main.go:141] libmachine: (multinode-556652-m02) Calling .GetState
	I0906 19:13:42.144417   39582 status.go:330] multinode-556652-m02 host status = "Running" (err=<nil>)
	I0906 19:13:42.144434   39582 host.go:66] Checking if "multinode-556652-m02" exists ...
	I0906 19:13:42.144709   39582 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0906 19:13:42.144743   39582 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 19:13:42.158915   39582 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42697
	I0906 19:13:42.159293   39582 main.go:141] libmachine: () Calling .GetVersion
	I0906 19:13:42.159717   39582 main.go:141] libmachine: Using API Version  1
	I0906 19:13:42.159734   39582 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 19:13:42.160072   39582 main.go:141] libmachine: () Calling .GetMachineName
	I0906 19:13:42.160247   39582 main.go:141] libmachine: (multinode-556652-m02) Calling .GetIP
	I0906 19:13:42.162896   39582 main.go:141] libmachine: (multinode-556652-m02) DBG | domain multinode-556652-m02 has defined MAC address 52:54:00:e5:a2:3a in network mk-multinode-556652
	I0906 19:13:42.163328   39582 main.go:141] libmachine: (multinode-556652-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:a2:3a", ip: ""} in network mk-multinode-556652: {Iface:virbr1 ExpiryTime:2024-09-06 20:11:43 +0000 UTC Type:0 Mac:52:54:00:e5:a2:3a Iaid: IPaddr:192.168.39.77 Prefix:24 Hostname:multinode-556652-m02 Clientid:01:52:54:00:e5:a2:3a}
	I0906 19:13:42.163348   39582 main.go:141] libmachine: (multinode-556652-m02) DBG | domain multinode-556652-m02 has defined IP address 192.168.39.77 and MAC address 52:54:00:e5:a2:3a in network mk-multinode-556652
	I0906 19:13:42.163507   39582 host.go:66] Checking if "multinode-556652-m02" exists ...
	I0906 19:13:42.163805   39582 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0906 19:13:42.163844   39582 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 19:13:42.177569   39582 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46427
	I0906 19:13:42.177940   39582 main.go:141] libmachine: () Calling .GetVersion
	I0906 19:13:42.178378   39582 main.go:141] libmachine: Using API Version  1
	I0906 19:13:42.178399   39582 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 19:13:42.178658   39582 main.go:141] libmachine: () Calling .GetMachineName
	I0906 19:13:42.178809   39582 main.go:141] libmachine: (multinode-556652-m02) Calling .DriverName
	I0906 19:13:42.179009   39582 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0906 19:13:42.179031   39582 main.go:141] libmachine: (multinode-556652-m02) Calling .GetSSHHostname
	I0906 19:13:42.181572   39582 main.go:141] libmachine: (multinode-556652-m02) DBG | domain multinode-556652-m02 has defined MAC address 52:54:00:e5:a2:3a in network mk-multinode-556652
	I0906 19:13:42.181976   39582 main.go:141] libmachine: (multinode-556652-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:a2:3a", ip: ""} in network mk-multinode-556652: {Iface:virbr1 ExpiryTime:2024-09-06 20:11:43 +0000 UTC Type:0 Mac:52:54:00:e5:a2:3a Iaid: IPaddr:192.168.39.77 Prefix:24 Hostname:multinode-556652-m02 Clientid:01:52:54:00:e5:a2:3a}
	I0906 19:13:42.182006   39582 main.go:141] libmachine: (multinode-556652-m02) DBG | domain multinode-556652-m02 has defined IP address 192.168.39.77 and MAC address 52:54:00:e5:a2:3a in network mk-multinode-556652
	I0906 19:13:42.182134   39582 main.go:141] libmachine: (multinode-556652-m02) Calling .GetSSHPort
	I0906 19:13:42.182290   39582 main.go:141] libmachine: (multinode-556652-m02) Calling .GetSSHKeyPath
	I0906 19:13:42.182464   39582 main.go:141] libmachine: (multinode-556652-m02) Calling .GetSSHUsername
	I0906 19:13:42.182593   39582 sshutil.go:53] new ssh client: &{IP:192.168.39.77 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19576-6054/.minikube/machines/multinode-556652-m02/id_rsa Username:docker}
	I0906 19:13:42.258501   39582 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0906 19:13:42.273635   39582 status.go:257] multinode-556652-m02 status: &{Name:multinode-556652-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0906 19:13:42.273666   39582 status.go:255] checking status of multinode-556652-m03 ...
	I0906 19:13:42.274069   39582 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0906 19:13:42.274114   39582 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 19:13:42.290121   39582 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42221
	I0906 19:13:42.290536   39582 main.go:141] libmachine: () Calling .GetVersion
	I0906 19:13:42.291037   39582 main.go:141] libmachine: Using API Version  1
	I0906 19:13:42.291072   39582 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 19:13:42.291382   39582 main.go:141] libmachine: () Calling .GetMachineName
	I0906 19:13:42.291574   39582 main.go:141] libmachine: (multinode-556652-m03) Calling .GetState
	I0906 19:13:42.293047   39582 status.go:330] multinode-556652-m03 host status = "Stopped" (err=<nil>)
	I0906 19:13:42.293059   39582 status.go:343] host is not running, skipping remaining checks
	I0906 19:13:42.293065   39582 status.go:257] multinode-556652-m03 status: &{Name:multinode-556652-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopNode (3.36s)

TestMultiNode/serial/StartAfterStop (42.22s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-556652 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-556652 node start m03 -v=7 --alsologtostderr: (41.624151051s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-556652 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (42.22s)

TestMultiNode/serial/RestartKeepsNodes (193.42s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-556652
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-556652
multinode_test.go:321: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-556652: (27.33035799s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-556652 --wait=true -v=8 --alsologtostderr
E0906 19:17:27.747628   13284 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6054/.minikube/profiles/functional-745007/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-556652 --wait=true -v=8 --alsologtostderr: (2m46.008607575s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-556652
--- PASS: TestMultiNode/serial/RestartKeepsNodes (193.42s)

TestMultiNode/serial/DeleteNode (2.17s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-556652 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-556652 node delete m03: (1.670975178s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-556652 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (2.17s)

TestMultiNode/serial/StopMultiNode (25.08s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-556652 stop
multinode_test.go:345: (dbg) Done: out/minikube-linux-amd64 -p multinode-556652 stop: (24.91470161s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-556652 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-556652 status: exit status 7 (80.083739ms)

-- stdout --
	multinode-556652
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-556652-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-556652 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-556652 status --alsologtostderr: exit status 7 (82.591873ms)

-- stdout --
	multinode-556652
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-556652-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0906 19:18:05.138622   41846 out.go:345] Setting OutFile to fd 1 ...
	I0906 19:18:05.138863   41846 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0906 19:18:05.138871   41846 out.go:358] Setting ErrFile to fd 2...
	I0906 19:18:05.138875   41846 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0906 19:18:05.139068   41846 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19576-6054/.minikube/bin
	I0906 19:18:05.139229   41846 out.go:352] Setting JSON to false
	I0906 19:18:05.139252   41846 mustload.go:65] Loading cluster: multinode-556652
	I0906 19:18:05.139352   41846 notify.go:220] Checking for updates...
	I0906 19:18:05.139599   41846 config.go:182] Loaded profile config "multinode-556652": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0906 19:18:05.139615   41846 status.go:255] checking status of multinode-556652 ...
	I0906 19:18:05.139995   41846 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0906 19:18:05.140068   41846 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 19:18:05.159332   41846 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45513
	I0906 19:18:05.159807   41846 main.go:141] libmachine: () Calling .GetVersion
	I0906 19:18:05.160475   41846 main.go:141] libmachine: Using API Version  1
	I0906 19:18:05.160517   41846 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 19:18:05.160875   41846 main.go:141] libmachine: () Calling .GetMachineName
	I0906 19:18:05.161098   41846 main.go:141] libmachine: (multinode-556652) Calling .GetState
	I0906 19:18:05.162749   41846 status.go:330] multinode-556652 host status = "Stopped" (err=<nil>)
	I0906 19:18:05.162764   41846 status.go:343] host is not running, skipping remaining checks
	I0906 19:18:05.162772   41846 status.go:257] multinode-556652 status: &{Name:multinode-556652 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0906 19:18:05.162813   41846 status.go:255] checking status of multinode-556652-m02 ...
	I0906 19:18:05.163126   41846 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0906 19:18:05.163171   41846 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 19:18:05.177825   41846 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36949
	I0906 19:18:05.178260   41846 main.go:141] libmachine: () Calling .GetVersion
	I0906 19:18:05.178757   41846 main.go:141] libmachine: Using API Version  1
	I0906 19:18:05.178795   41846 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 19:18:05.179114   41846 main.go:141] libmachine: () Calling .GetMachineName
	I0906 19:18:05.179285   41846 main.go:141] libmachine: (multinode-556652-m02) Calling .GetState
	I0906 19:18:05.180711   41846 status.go:330] multinode-556652-m02 host status = "Stopped" (err=<nil>)
	I0906 19:18:05.180727   41846 status.go:343] host is not running, skipping remaining checks
	I0906 19:18:05.180734   41846 status.go:257] multinode-556652-m02 status: &{Name:multinode-556652-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (25.08s)

TestMultiNode/serial/RestartMultiNode (102.35s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-556652 --wait=true -v=8 --alsologtostderr --driver=kvm2 
E0906 19:18:06.742441   13284 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6054/.minikube/profiles/addons-009491/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-556652 --wait=true -v=8 --alsologtostderr --driver=kvm2 : (1m41.846547826s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-556652 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (102.35s)

TestMultiNode/serial/ValidateNameConflict (51.12s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-556652
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-556652-m02 --driver=kvm2 
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-556652-m02 --driver=kvm2 : exit status 14 (56.308229ms)

-- stdout --
	* [multinode-556652-m02] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19576
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19576-6054/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19576-6054/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	! Profile name 'multinode-556652-m02' is duplicated with machine name 'multinode-556652-m02' in profile 'multinode-556652'
	X Exiting due to MK_USAGE: Profile name should be unique

** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-556652-m03 --driver=kvm2 
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-556652-m03 --driver=kvm2 : (50.048730898s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-556652
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-556652: exit status 80 (198.40691ms)

-- stdout --
	* Adding node m03 to cluster multinode-556652 as [worker]
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-556652-m03 already exists in multinode-556652-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-556652-m03
--- PASS: TestMultiNode/serial/ValidateNameConflict (51.12s)

TestPreload (187.35s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-194861 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --kubernetes-version=v1.24.4
E0906 19:21:09.809052   13284 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6054/.minikube/profiles/addons-009491/client.crt: no such file or directory" logger="UnhandledError"
E0906 19:22:27.747778   13284 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6054/.minikube/profiles/functional-745007/client.crt: no such file or directory" logger="UnhandledError"
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-194861 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --kubernetes-version=v1.24.4: (1m59.725322568s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-194861 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-amd64 -p test-preload-194861 image pull gcr.io/k8s-minikube/busybox: (1.590997617s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-194861
preload_test.go:58: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-194861: (12.529657749s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-194861 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2 
E0906 19:23:06.742654   13284 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6054/.minikube/profiles/addons-009491/client.crt: no such file or directory" logger="UnhandledError"
preload_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-194861 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2 : (52.454813043s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-194861 image list
helpers_test.go:175: Cleaning up "test-preload-194861" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-194861
--- PASS: TestPreload (187.35s)

TestScheduledStopUnix (123.93s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-345332 --memory=2048 --driver=kvm2 
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-345332 --memory=2048 --driver=kvm2 : (52.442911471s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-345332 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-345332 -n scheduled-stop-345332
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-345332 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-345332 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-345332 -n scheduled-stop-345332
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-345332
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-345332 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
E0906 19:25:30.818778   13284 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6054/.minikube/profiles/functional-745007/client.crt: no such file or directory" logger="UnhandledError"
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-345332
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-345332: exit status 7 (61.431376ms)

-- stdout --
	scheduled-stop-345332
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-345332 -n scheduled-stop-345332
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-345332 -n scheduled-stop-345332: exit status 7 (57.972087ms)

-- stdout --
	Stopped

-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-345332" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-345332
--- PASS: TestScheduledStopUnix (123.93s)

TestSkaffold (129.92s)

=== RUN   TestSkaffold
skaffold_test.go:59: (dbg) Run:  /tmp/skaffold.exe1765833680 version
skaffold_test.go:63: skaffold version: v2.13.2
skaffold_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p skaffold-017275 --memory=2600 --driver=kvm2 
skaffold_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p skaffold-017275 --memory=2600 --driver=kvm2 : (49.664291678s)
skaffold_test.go:86: copying out/minikube-linux-amd64 to /home/jenkins/workspace/KVM_Linux_integration/out/minikube
skaffold_test.go:105: (dbg) Run:  /tmp/skaffold.exe1765833680 run --minikube-profile skaffold-017275 --kube-context skaffold-017275 --status-check=true --port-forward=false --interactive=false
E0906 19:27:27.747213   13284 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6054/.minikube/profiles/functional-745007/client.crt: no such file or directory" logger="UnhandledError"
skaffold_test.go:105: (dbg) Done: /tmp/skaffold.exe1765833680 run --minikube-profile skaffold-017275 --kube-context skaffold-017275 --status-check=true --port-forward=false --interactive=false: (1m7.095156477s)
skaffold_test.go:111: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-app" in namespace "default" ...
helpers_test.go:344: "leeroy-app-7f8f48dd5-jcvkr" [64fa40d4-4fdd-4956-9f96-c9e4320580a5] Running
skaffold_test.go:111: (dbg) TestSkaffold: app=leeroy-app healthy within 6.003998112s
skaffold_test.go:114: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-web" in namespace "default" ...
helpers_test.go:344: "leeroy-web-65c87746b6-bpb8p" [9145fd21-65cd-4895-b20f-8f4bbca82839] Running
skaffold_test.go:114: (dbg) TestSkaffold: app=leeroy-web healthy within 5.004102475s
helpers_test.go:175: Cleaning up "skaffold-017275" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p skaffold-017275
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p skaffold-017275: (1.19593241s)
--- PASS: TestSkaffold (129.92s)

TestRunningBinaryUpgrade (135.85s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.2486169081 start -p running-upgrade-091330 --memory=2200 --vm-driver=kvm2 
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.2486169081 start -p running-upgrade-091330 --memory=2200 --vm-driver=kvm2 : (1m38.249391418s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-091330 --memory=2200 --alsologtostderr -v=1 --driver=kvm2 
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-091330 --memory=2200 --alsologtostderr -v=1 --driver=kvm2 : (35.675104452s)
helpers_test.go:175: Cleaning up "running-upgrade-091330" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-091330
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-091330: (1.427892934s)
--- PASS: TestRunningBinaryUpgrade (135.85s)

TestKubernetesUpgrade (214s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-374777 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2 
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-374777 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2 : (1m5.786271428s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-374777
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-374777: (12.509331136s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-374777 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-374777 status --format={{.Host}}: exit status 7 (63.335025ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-374777 --memory=2200 --kubernetes-version=v1.31.0 --alsologtostderr -v=1 --driver=kvm2 
E0906 19:32:27.747232   13284 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6054/.minikube/profiles/functional-745007/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-374777 --memory=2200 --kubernetes-version=v1.31.0 --alsologtostderr -v=1 --driver=kvm2 : (1m13.358530252s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-374777 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-374777 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2 
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-374777 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2 : exit status 106 (78.461014ms)

-- stdout --
	* [kubernetes-upgrade-374777] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19576
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19576-6054/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19576-6054/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.31.0 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-374777
	    minikube start -p kubernetes-upgrade-374777 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-3747772 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.31.0, by running:
	    
	    minikube start -p kubernetes-upgrade-374777 --kubernetes-version=v1.31.0
	    

** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-374777 --memory=2200 --kubernetes-version=v1.31.0 --alsologtostderr -v=1 --driver=kvm2 
E0906 19:33:30.141592   13284 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6054/.minikube/profiles/skaffold-017275/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-374777 --memory=2200 --kubernetes-version=v1.31.0 --alsologtostderr -v=1 --driver=kvm2 : (1m0.949085314s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-374777" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-374777
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-374777: (1.19301591s)
--- PASS: TestKubernetesUpgrade (214.00s)

TestStoppedBinaryUpgrade/Setup (0.5s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.50s)

TestStoppedBinaryUpgrade/Upgrade (119.61s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.3764045503 start -p stopped-upgrade-362252 --memory=2200 --vm-driver=kvm2 
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.3764045503 start -p stopped-upgrade-362252 --memory=2200 --vm-driver=kvm2 : (1m0.125950623s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.3764045503 -p stopped-upgrade-362252 stop
E0906 19:32:54.295802   13284 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6054/.minikube/profiles/skaffold-017275/client.crt: no such file or directory" logger="UnhandledError"
E0906 19:32:59.417346   13284 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6054/.minikube/profiles/skaffold-017275/client.crt: no such file or directory" logger="UnhandledError"
E0906 19:33:06.741514   13284 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6054/.minikube/profiles/addons-009491/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.3764045503 -p stopped-upgrade-362252 stop: (13.165387939s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-362252 --memory=2200 --alsologtostderr -v=1 --driver=kvm2 
E0906 19:33:09.659325   13284 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6054/.minikube/profiles/skaffold-017275/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-362252 --memory=2200 --alsologtostderr -v=1 --driver=kvm2 : (46.318378357s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (119.61s)

TestPause/serial/Start (79.6s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-737076 --memory=2048 --install-addons=false --wait=all --driver=kvm2 
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-737076 --memory=2048 --install-addons=false --wait=all --driver=kvm2 : (1m19.60271849s)
--- PASS: TestPause/serial/Start (79.60s)

TestStoppedBinaryUpgrade/MinikubeLogs (0.97s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-362252
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.97s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.06s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-917741 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2 
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-917741 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2 : exit status 14 (55.903809ms)

-- stdout --
	* [NoKubernetes-917741] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19576
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19576-6054/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19576-6054/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.06s)

TestNoKubernetes/serial/StartWithK8s (91.14s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-917741 --driver=kvm2 
E0906 19:34:11.103329   13284 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6054/.minikube/profiles/skaffold-017275/client.crt: no such file or directory" logger="UnhandledError"
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-917741 --driver=kvm2 : (1m30.854608095s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-917741 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (91.14s)

TestNetworkPlugins/group/auto/Start (119.82s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-008590 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2 
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-008590 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2 : (1m59.821574035s)
--- PASS: TestNetworkPlugins/group/auto/Start (119.82s)

TestNetworkPlugins/group/kindnet/Start (125.81s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-008590 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2 
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-008590 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2 : (2m5.808455978s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (125.81s)

TestPause/serial/SecondStartNoReconfiguration (108.96s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-737076 --alsologtostderr -v=1 --driver=kvm2 
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-737076 --alsologtostderr -v=1 --driver=kvm2 : (1m48.932064681s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (108.96s)

TestNoKubernetes/serial/StartWithStopK8s (33.95s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-917741 --no-kubernetes --driver=kvm2 
E0906 19:35:33.025369   13284 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6054/.minikube/profiles/skaffold-017275/client.crt: no such file or directory" logger="UnhandledError"
E0906 19:35:38.301127   13284 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6054/.minikube/profiles/gvisor-767932/client.crt: no such file or directory" logger="UnhandledError"
E0906 19:35:38.307526   13284 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6054/.minikube/profiles/gvisor-767932/client.crt: no such file or directory" logger="UnhandledError"
E0906 19:35:38.318873   13284 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6054/.minikube/profiles/gvisor-767932/client.crt: no such file or directory" logger="UnhandledError"
E0906 19:35:38.340212   13284 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6054/.minikube/profiles/gvisor-767932/client.crt: no such file or directory" logger="UnhandledError"
E0906 19:35:38.381605   13284 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6054/.minikube/profiles/gvisor-767932/client.crt: no such file or directory" logger="UnhandledError"
E0906 19:35:38.463054   13284 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6054/.minikube/profiles/gvisor-767932/client.crt: no such file or directory" logger="UnhandledError"
E0906 19:35:38.624604   13284 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6054/.minikube/profiles/gvisor-767932/client.crt: no such file or directory" logger="UnhandledError"
E0906 19:35:38.946324   13284 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6054/.minikube/profiles/gvisor-767932/client.crt: no such file or directory" logger="UnhandledError"
E0906 19:35:39.588275   13284 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6054/.minikube/profiles/gvisor-767932/client.crt: no such file or directory" logger="UnhandledError"
E0906 19:35:40.870527   13284 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6054/.minikube/profiles/gvisor-767932/client.crt: no such file or directory" logger="UnhandledError"
E0906 19:35:43.431820   13284 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6054/.minikube/profiles/gvisor-767932/client.crt: no such file or directory" logger="UnhandledError"
E0906 19:35:48.553450   13284 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6054/.minikube/profiles/gvisor-767932/client.crt: no such file or directory" logger="UnhandledError"
E0906 19:35:58.794822   13284 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6054/.minikube/profiles/gvisor-767932/client.crt: no such file or directory" logger="UnhandledError"
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-917741 --no-kubernetes --driver=kvm2 : (32.694168435s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-917741 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-917741 status -o json: exit status 2 (222.31377ms)

-- stdout --
	{"Name":"NoKubernetes-917741","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-917741
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-917741: (1.034058012s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (33.95s)

TestNoKubernetes/serial/Start (28.88s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-917741 --no-kubernetes --driver=kvm2 
E0906 19:36:19.276981   13284 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6054/.minikube/profiles/gvisor-767932/client.crt: no such file or directory" logger="UnhandledError"
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-917741 --no-kubernetes --driver=kvm2 : (28.879280489s)
--- PASS: TestNoKubernetes/serial/Start (28.88s)

TestNetworkPlugins/group/auto/KubeletFlags (0.23s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-008590 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.23s)

TestNetworkPlugins/group/auto/NetCatPod (10.33s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-008590 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-fx8hb" [7590cd40-726a-4057-8a6c-2d47bc105839] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-fx8hb" [7590cd40-726a-4057-8a6c-2d47bc105839] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 10.004820675s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (10.33s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.22s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-917741 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-917741 "sudo systemctl is-active --quiet service kubelet": exit status 1 (224.17566ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.22s)

TestNoKubernetes/serial/ProfileList (1.37s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.37s)

TestNoKubernetes/serial/Stop (2.3s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-917741
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-917741: (2.301923468s)
--- PASS: TestNoKubernetes/serial/Stop (2.30s)

TestNoKubernetes/serial/StartNoArgs (27.64s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-917741 --driver=kvm2 
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-917741 --driver=kvm2 : (27.636316474s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (27.64s)

TestNetworkPlugins/group/auto/DNS (0.16s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-008590 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.16s)

TestNetworkPlugins/group/auto/Localhost (0.16s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-008590 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.16s)

TestNetworkPlugins/group/auto/HairPin (0.15s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-008590 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.15s)

TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-rmzq7" [ac077aa2-6d0b-4d34-afe2-5f98c19c2ba5] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.003835956s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

TestNetworkPlugins/group/kindnet/KubeletFlags (0.22s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-008590 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.22s)

TestNetworkPlugins/group/kindnet/NetCatPod (10.28s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-008590 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-rwdm7" [08327449-fcb0-4b41-b874-2de374817516] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-rwdm7" [08327449-fcb0-4b41-b874-2de374817516] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 10.004490071s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (10.28s)

TestPause/serial/Pause (1.11s)

=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-737076 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-linux-amd64 pause -p pause-737076 --alsologtostderr -v=5: (1.112872891s)
--- PASS: TestPause/serial/Pause (1.11s)

TestNetworkPlugins/group/calico/Start (90.21s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-008590 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2 
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-008590 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2 : (1m30.211346976s)
--- PASS: TestNetworkPlugins/group/calico/Start (90.21s)

TestPause/serial/VerifyStatus (0.26s)

=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p pause-737076 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-737076 --output=json --layout=cluster: exit status 2 (257.08438ms)

-- stdout --
	{"Name":"pause-737076","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 12 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.34.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-737076","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.26s)

TestPause/serial/Unpause (0.52s)

=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-amd64 unpause -p pause-737076 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.52s)

TestPause/serial/PauseAgain (0.7s)

=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-737076 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.70s)

TestNetworkPlugins/group/kindnet/DNS (0.18s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-008590 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.18s)

TestNetworkPlugins/group/kindnet/Localhost (0.17s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-008590 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.17s)

TestPause/serial/DeletePaused (1.05s)

=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p pause-737076 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p pause-737076 --alsologtostderr -v=5: (1.047135044s)
--- PASS: TestPause/serial/DeletePaused (1.05s)

TestNetworkPlugins/group/kindnet/HairPin (0.15s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-008590 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.15s)

TestPause/serial/VerifyDeletedResources (0.42s)

=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestPause/serial/VerifyDeletedResources (0.42s)

TestNetworkPlugins/group/custom-flannel/Start (99.87s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-008590 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2 
E0906 19:37:00.239317   13284 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6054/.minikube/profiles/gvisor-767932/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-008590 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2 : (1m39.871698073s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (99.87s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.2s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-917741 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-917741 "sudo systemctl is-active --quiet service kubelet": exit status 1 (195.904927ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.20s)

TestNetworkPlugins/group/false/Start (115.04s)

=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p false-008590 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=kvm2 
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p false-008590 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=kvm2 : (1m55.035237084s)
--- PASS: TestNetworkPlugins/group/false/Start (115.04s)

TestNetworkPlugins/group/enable-default-cni/Start (135.42s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-008590 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2 
E0906 19:37:27.747204   13284 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6054/.minikube/profiles/functional-745007/client.crt: no such file or directory" logger="UnhandledError"
E0906 19:37:49.163824   13284 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6054/.minikube/profiles/skaffold-017275/client.crt: no such file or directory" logger="UnhandledError"
E0906 19:37:49.811246   13284 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6054/.minikube/profiles/addons-009491/client.crt: no such file or directory" logger="UnhandledError"
E0906 19:38:06.741665   13284 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6054/.minikube/profiles/addons-009491/client.crt: no such file or directory" logger="UnhandledError"
E0906 19:38:16.867429   13284 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6054/.minikube/profiles/skaffold-017275/client.crt: no such file or directory" logger="UnhandledError"
E0906 19:38:22.160853   13284 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6054/.minikube/profiles/gvisor-767932/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-008590 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2 : (2m15.415951313s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (135.42s)

                                                
                                    
TestNetworkPlugins/group/calico/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-p4nk9" [a2ff2d75-212a-4191-ae38-30441db7f14e] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.005555686s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/calico/KubeletFlags (0.23s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-008590 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.23s)

                                                
                                    
TestNetworkPlugins/group/calico/NetCatPod (11.26s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-008590 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-pwpd2" [9fd221fe-1281-48e4-a9e6-73c755878e2b] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-pwpd2" [9fd221fe-1281-48e4-a9e6-73c755878e2b] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 11.01293653s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (11.26s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.23s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-008590 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.23s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/NetCatPod (10.24s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-008590 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-vhrk5" [e185474e-3321-48d9-9aa5-1e1d9b51e846] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-vhrk5" [e185474e-3321-48d9-9aa5-1e1d9b51e846] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 10.004901514s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (10.24s)

                                                
                                    
TestNetworkPlugins/group/calico/DNS (0.19s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-008590 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.19s)

                                                
                                    
TestNetworkPlugins/group/calico/Localhost (0.16s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-008590 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.16s)

                                                
                                    
TestNetworkPlugins/group/calico/HairPin (0.15s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-008590 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.15s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/DNS (0.24s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-008590 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.24s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Localhost (0.19s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-008590 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.19s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/HairPin (0.16s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-008590 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.16s)

                                                
                                    
TestNetworkPlugins/group/false/KubeletFlags (0.26s)

=== RUN   TestNetworkPlugins/group/false/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p false-008590 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/false/KubeletFlags (0.26s)

                                                
                                    
TestNetworkPlugins/group/false/NetCatPod (12.30s)

=== RUN   TestNetworkPlugins/group/false/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context false-008590 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/false/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-tdjs7" [ee3763fc-38a7-4c82-a9c8-53f5919a2cbc] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-tdjs7" [ee3763fc-38a7-4c82-a9c8-53f5919a2cbc] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/false/NetCatPod: app=netcat healthy within 12.004651515s
--- PASS: TestNetworkPlugins/group/false/NetCatPod (12.30s)

                                                
                                    
TestNetworkPlugins/group/flannel/Start (68.81s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-008590 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2 
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-008590 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2 : (1m8.808100435s)
--- PASS: TestNetworkPlugins/group/flannel/Start (68.81s)

                                                
                                    
TestNetworkPlugins/group/bridge/Start (88.33s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-008590 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2 
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-008590 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2 : (1m28.325865292s)
--- PASS: TestNetworkPlugins/group/bridge/Start (88.33s)

                                                
                                    
TestNetworkPlugins/group/false/DNS (0.19s)

=== RUN   TestNetworkPlugins/group/false/DNS
net_test.go:175: (dbg) Run:  kubectl --context false-008590 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/false/DNS (0.19s)

                                                
                                    
TestNetworkPlugins/group/false/Localhost (0.15s)

=== RUN   TestNetworkPlugins/group/false/Localhost
net_test.go:194: (dbg) Run:  kubectl --context false-008590 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/false/Localhost (0.15s)

                                                
                                    
TestNetworkPlugins/group/false/HairPin (0.15s)

=== RUN   TestNetworkPlugins/group/false/HairPin
net_test.go:264: (dbg) Run:  kubectl --context false-008590 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/false/HairPin (0.15s)

                                                
                                    
TestNetworkPlugins/group/kubenet/Start (96.81s)

=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kubenet-008590 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=kvm2 
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kubenet-008590 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=kvm2 : (1m36.808076865s)
--- PASS: TestNetworkPlugins/group/kubenet/Start (96.81s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.21s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-008590 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.21s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.27s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-008590 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-bnzrb" [18e3db7a-6734-4be3-9d1b-91040225c982] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-bnzrb" [18e3db7a-6734-4be3-9d1b-91040225c982] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 11.004160632s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.27s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/DNS (0.16s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-008590 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.16s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Localhost (0.15s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-008590 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.15s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/HairPin (0.16s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-008590 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.16s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/FirstStart (177.56s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-799682 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --kubernetes-version=v1.20.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-799682 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --kubernetes-version=v1.20.0: (2m57.562308177s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (177.56s)

                                                
                                    
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-4d5sm" [19a52422-7f8c-4ddc-bb4b-086a6b556592] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.005629292s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/flannel/KubeletFlags (0.21s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-008590 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.21s)

                                                
                                    
TestNetworkPlugins/group/flannel/NetCatPod (10.21s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-008590 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-5d749" [aa40edf6-29c8-4bcf-b2de-27c3a8061a24] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-5d749" [aa40edf6-29c8-4bcf-b2de-27c3a8061a24] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 10.00531492s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (10.21s)

                                                
                                    
TestNetworkPlugins/group/flannel/DNS (0.21s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-008590 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.21s)

                                                
                                    
TestNetworkPlugins/group/flannel/Localhost (0.24s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-008590 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.24s)

                                                
                                    
TestNetworkPlugins/group/flannel/HairPin (0.20s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-008590 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.20s)

                                                
                                    
TestNetworkPlugins/group/bridge/KubeletFlags (0.21s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-008590 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.21s)

                                                
                                    
TestNetworkPlugins/group/bridge/NetCatPod (10.24s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-008590 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-zfr65" [f59214d0-f299-4df7-82b0-87763b365631] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0906 19:40:38.301369   13284 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6054/.minikube/profiles/gvisor-767932/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "netcat-6fc964789b-zfr65" [f59214d0-f299-4df7-82b0-87763b365631] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 10.005906002s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (10.24s)

                                                
                                    
TestStartStop/group/no-preload/serial/FirstStart (111.72s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-785616 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --kubernetes-version=v1.31.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-785616 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --kubernetes-version=v1.31.0: (1m51.720564983s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (111.72s)

                                                
                                    
TestNetworkPlugins/group/bridge/DNS (0.17s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-008590 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.17s)

                                                
                                    
TestNetworkPlugins/group/bridge/Localhost (0.16s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-008590 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.16s)

                                                
                                    
TestNetworkPlugins/group/bridge/HairPin (0.15s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-008590 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.15s)

                                                
                                    
TestNetworkPlugins/group/kubenet/KubeletFlags (0.23s)

=== RUN   TestNetworkPlugins/group/kubenet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kubenet-008590 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kubenet/KubeletFlags (0.23s)

                                                
                                    
TestNetworkPlugins/group/kubenet/NetCatPod (10.28s)

=== RUN   TestNetworkPlugins/group/kubenet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kubenet-008590 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-cvjvz" [b597e43b-530e-433e-9855-00cb43c0fe03] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-cvjvz" [b597e43b-530e-433e-9855-00cb43c0fe03] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: app=netcat healthy within 10.004382513s
--- PASS: TestNetworkPlugins/group/kubenet/NetCatPod (10.28s)

                                                
                                    
TestStartStop/group/embed-certs/serial/FirstStart (115.22s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-804058 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --kubernetes-version=v1.31.0
E0906 19:41:06.003176   13284 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6054/.minikube/profiles/gvisor-767932/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-804058 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --kubernetes-version=v1.31.0: (1m55.216745848s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (115.22s)

                                                
                                    
TestNetworkPlugins/group/kubenet/DNS (0.14s)

=== RUN   TestNetworkPlugins/group/kubenet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kubenet-008590 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kubenet/DNS (0.14s)

                                                
                                    
TestNetworkPlugins/group/kubenet/Localhost (0.13s)

=== RUN   TestNetworkPlugins/group/kubenet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kubenet-008590 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kubenet/Localhost (0.13s)

                                                
                                    
TestNetworkPlugins/group/kubenet/HairPin (0.14s)

=== RUN   TestNetworkPlugins/group/kubenet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kubenet-008590 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kubenet/HairPin (0.14s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (83.65s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-187530 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --kubernetes-version=v1.31.0
E0906 19:41:28.724745   13284 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6054/.minikube/profiles/auto-008590/client.crt: no such file or directory" logger="UnhandledError"
E0906 19:41:29.366065   13284 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6054/.minikube/profiles/auto-008590/client.crt: no such file or directory" logger="UnhandledError"
E0906 19:41:30.648218   13284 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6054/.minikube/profiles/auto-008590/client.crt: no such file or directory" logger="UnhandledError"
E0906 19:41:33.209965   13284 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6054/.minikube/profiles/auto-008590/client.crt: no such file or directory" logger="UnhandledError"
E0906 19:41:38.332238   13284 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6054/.minikube/profiles/auto-008590/client.crt: no such file or directory" logger="UnhandledError"
E0906 19:41:38.830969   13284 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6054/.minikube/profiles/kindnet-008590/client.crt: no such file or directory" logger="UnhandledError"
E0906 19:41:38.837393   13284 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6054/.minikube/profiles/kindnet-008590/client.crt: no such file or directory" logger="UnhandledError"
E0906 19:41:38.848764   13284 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6054/.minikube/profiles/kindnet-008590/client.crt: no such file or directory" logger="UnhandledError"
E0906 19:41:38.870192   13284 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6054/.minikube/profiles/kindnet-008590/client.crt: no such file or directory" logger="UnhandledError"
E0906 19:41:38.911624   13284 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6054/.minikube/profiles/kindnet-008590/client.crt: no such file or directory" logger="UnhandledError"
E0906 19:41:38.993098   13284 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6054/.minikube/profiles/kindnet-008590/client.crt: no such file or directory" logger="UnhandledError"
E0906 19:41:39.155208   13284 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6054/.minikube/profiles/kindnet-008590/client.crt: no such file or directory" logger="UnhandledError"
E0906 19:41:39.477160   13284 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6054/.minikube/profiles/kindnet-008590/client.crt: no such file or directory" logger="UnhandledError"
E0906 19:41:40.118542   13284 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6054/.minikube/profiles/kindnet-008590/client.crt: no such file or directory" logger="UnhandledError"
E0906 19:41:41.400685   13284 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6054/.minikube/profiles/kindnet-008590/client.crt: no such file or directory" logger="UnhandledError"
E0906 19:41:43.962273   13284 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6054/.minikube/profiles/kindnet-008590/client.crt: no such file or directory" logger="UnhandledError"
E0906 19:41:48.574378   13284 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6054/.minikube/profiles/auto-008590/client.crt: no such file or directory" logger="UnhandledError"
E0906 19:41:49.083583   13284 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6054/.minikube/profiles/kindnet-008590/client.crt: no such file or directory" logger="UnhandledError"
E0906 19:41:59.325755   13284 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6054/.minikube/profiles/kindnet-008590/client.crt: no such file or directory" logger="UnhandledError"
E0906 19:42:09.056273   13284 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6054/.minikube/profiles/auto-008590/client.crt: no such file or directory" logger="UnhandledError"
E0906 19:42:10.820151   13284 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6054/.minikube/profiles/functional-745007/client.crt: no such file or directory" logger="UnhandledError"
E0906 19:42:19.807968   13284 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6054/.minikube/profiles/kindnet-008590/client.crt: no such file or directory" logger="UnhandledError"
E0906 19:42:27.747569   13284 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6054/.minikube/profiles/functional-745007/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-187530 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --kubernetes-version=v1.31.0: (1m23.651290888s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (83.65s)

TestStartStop/group/no-preload/serial/DeployApp (9.32s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-785616 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [de1ad6a9-7dc3-4063-a089-c708a5068439] Pending
helpers_test.go:344: "busybox" [de1ad6a9-7dc3-4063-a089-c708a5068439] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [de1ad6a9-7dc3-4063-a089-c708a5068439] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 9.00511713s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-785616 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (9.32s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.15s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-785616 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p no-preload-785616 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.039576684s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-785616 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.15s)

TestStartStop/group/no-preload/serial/Stop (13.42s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-785616 --alsologtostderr -v=3
E0906 19:42:49.164276   13284 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6054/.minikube/profiles/skaffold-017275/client.crt: no such file or directory" logger="UnhandledError"
E0906 19:42:50.017882   13284 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6054/.minikube/profiles/auto-008590/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-785616 --alsologtostderr -v=3: (13.419079076s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (13.42s)

TestStartStop/group/old-k8s-version/serial/DeployApp (8.5s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-799682 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [0ca5e076-7564-495e-889a-b31a9e362c0a] Pending
helpers_test.go:344: "busybox" [0ca5e076-7564-495e-889a-b31a9e362c0a] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [0ca5e076-7564-495e-889a-b31a9e362c0a] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 8.003891174s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-799682 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (8.50s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.33s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-187530 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [fd9e19f4-ddad-4905-b38d-38acdfe122af] Pending
helpers_test.go:344: "busybox" [fd9e19f4-ddad-4905-b38d-38acdfe122af] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [fd9e19f4-ddad-4905-b38d-38acdfe122af] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 8.005130916s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-187530 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.33s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.18s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-785616 -n no-preload-785616
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-785616 -n no-preload-785616: exit status 7 (61.873603ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-785616 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.18s)

TestStartStop/group/no-preload/serial/SecondStart (299.98s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-785616 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --kubernetes-version=v1.31.0
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-785616 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --kubernetes-version=v1.31.0: (4m59.738025668s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-785616 -n no-preload-785616
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (299.98s)

TestStartStop/group/embed-certs/serial/DeployApp (11.3s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-804058 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [a7666352-3e4b-4391-b7c1-b5a326bb4bf6] Pending
helpers_test.go:344: "busybox" [a7666352-3e4b-4391-b7c1-b5a326bb4bf6] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [a7666352-3e4b-4391-b7c1-b5a326bb4bf6] Running
E0906 19:43:06.742042   13284 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6054/.minikube/profiles/addons-009491/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 11.004324948s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-804058 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (11.30s)

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.99s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-799682 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-799682 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.99s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.19s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-187530 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E0906 19:43:00.770411   13284 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6054/.minikube/profiles/kindnet-008590/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-187530 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.080457895s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-187530 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.19s)

TestStartStop/group/old-k8s-version/serial/Stop (13.39s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-799682 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-799682 --alsologtostderr -v=3: (13.390020165s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (13.39s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (13.37s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-187530 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-187530 --alsologtostderr -v=3: (13.367657553s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (13.37s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-804058 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-804058 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.00s)

TestStartStop/group/embed-certs/serial/Stop (13.36s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-804058 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-804058 --alsologtostderr -v=3: (13.358381158s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (13.36s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.18s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-799682 -n old-k8s-version-799682
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-799682 -n old-k8s-version-799682: exit status 7 (65.623704ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-799682 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.18s)

TestStartStop/group/old-k8s-version/serial/SecondStart (409.64s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-799682 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --kubernetes-version=v1.20.0
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-799682 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --kubernetes-version=v1.20.0: (6m49.385676535s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-799682 -n old-k8s-version-799682
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (409.64s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.17s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-187530 -n default-k8s-diff-port-187530
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-187530 -n default-k8s-diff-port-187530: exit status 7 (61.177605ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-187530 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.17s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (332.38s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-187530 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --kubernetes-version=v1.31.0
E0906 19:43:24.078106   13284 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6054/.minikube/profiles/calico-008590/client.crt: no such file or directory" logger="UnhandledError"
E0906 19:43:24.084474   13284 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6054/.minikube/profiles/calico-008590/client.crt: no such file or directory" logger="UnhandledError"
E0906 19:43:24.095796   13284 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6054/.minikube/profiles/calico-008590/client.crt: no such file or directory" logger="UnhandledError"
E0906 19:43:24.117147   13284 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6054/.minikube/profiles/calico-008590/client.crt: no such file or directory" logger="UnhandledError"
E0906 19:43:24.158606   13284 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6054/.minikube/profiles/calico-008590/client.crt: no such file or directory" logger="UnhandledError"
E0906 19:43:24.240254   13284 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6054/.minikube/profiles/calico-008590/client.crt: no such file or directory" logger="UnhandledError"
E0906 19:43:24.401787   13284 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6054/.minikube/profiles/calico-008590/client.crt: no such file or directory" logger="UnhandledError"
E0906 19:43:24.723780   13284 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6054/.minikube/profiles/calico-008590/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-187530 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --kubernetes-version=v1.31.0: (5m32.098040325s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-187530 -n default-k8s-diff-port-187530
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (332.38s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.2s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-804058 -n embed-certs-804058
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-804058 -n embed-certs-804058: exit status 7 (64.238037ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-804058 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.20s)

TestStartStop/group/embed-certs/serial/SecondStart (339.64s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-804058 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --kubernetes-version=v1.31.0
E0906 19:43:25.365460   13284 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6054/.minikube/profiles/calico-008590/client.crt: no such file or directory" logger="UnhandledError"
E0906 19:43:26.647255   13284 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6054/.minikube/profiles/calico-008590/client.crt: no such file or directory" logger="UnhandledError"
E0906 19:43:29.208978   13284 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6054/.minikube/profiles/calico-008590/client.crt: no such file or directory" logger="UnhandledError"
E0906 19:43:34.331245   13284 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6054/.minikube/profiles/calico-008590/client.crt: no such file or directory" logger="UnhandledError"
E0906 19:43:37.601974   13284 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6054/.minikube/profiles/custom-flannel-008590/client.crt: no such file or directory" logger="UnhandledError"
E0906 19:43:37.608334   13284 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6054/.minikube/profiles/custom-flannel-008590/client.crt: no such file or directory" logger="UnhandledError"
E0906 19:43:37.619667   13284 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6054/.minikube/profiles/custom-flannel-008590/client.crt: no such file or directory" logger="UnhandledError"
E0906 19:43:37.641004   13284 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6054/.minikube/profiles/custom-flannel-008590/client.crt: no such file or directory" logger="UnhandledError"
E0906 19:43:37.682377   13284 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6054/.minikube/profiles/custom-flannel-008590/client.crt: no such file or directory" logger="UnhandledError"
E0906 19:43:37.763872   13284 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6054/.minikube/profiles/custom-flannel-008590/client.crt: no such file or directory" logger="UnhandledError"
E0906 19:43:37.925129   13284 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6054/.minikube/profiles/custom-flannel-008590/client.crt: no such file or directory" logger="UnhandledError"
E0906 19:43:38.247400   13284 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6054/.minikube/profiles/custom-flannel-008590/client.crt: no such file or directory" logger="UnhandledError"
E0906 19:43:38.889651   13284 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6054/.minikube/profiles/custom-flannel-008590/client.crt: no such file or directory" logger="UnhandledError"
E0906 19:43:40.171468   13284 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6054/.minikube/profiles/custom-flannel-008590/client.crt: no such file or directory" logger="UnhandledError"
E0906 19:43:42.733214   13284 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6054/.minikube/profiles/custom-flannel-008590/client.crt: no such file or directory" logger="UnhandledError"
E0906 19:43:44.573447   13284 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6054/.minikube/profiles/calico-008590/client.crt: no such file or directory" logger="UnhandledError"
E0906 19:43:47.855416   13284 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6054/.minikube/profiles/custom-flannel-008590/client.crt: no such file or directory" logger="UnhandledError"
E0906 19:43:57.465468   13284 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6054/.minikube/profiles/false-008590/client.crt: no such file or directory" logger="UnhandledError"
E0906 19:43:57.471871   13284 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6054/.minikube/profiles/false-008590/client.crt: no such file or directory" logger="UnhandledError"
E0906 19:43:57.483226   13284 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6054/.minikube/profiles/false-008590/client.crt: no such file or directory" logger="UnhandledError"
E0906 19:43:57.504582   13284 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6054/.minikube/profiles/false-008590/client.crt: no such file or directory" logger="UnhandledError"
E0906 19:43:57.545957   13284 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6054/.minikube/profiles/false-008590/client.crt: no such file or directory" logger="UnhandledError"
E0906 19:43:57.627579   13284 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6054/.minikube/profiles/false-008590/client.crt: no such file or directory" logger="UnhandledError"
E0906 19:43:57.789244   13284 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6054/.minikube/profiles/false-008590/client.crt: no such file or directory" logger="UnhandledError"
E0906 19:43:58.097485   13284 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6054/.minikube/profiles/custom-flannel-008590/client.crt: no such file or directory" logger="UnhandledError"
E0906 19:43:58.110957   13284 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6054/.minikube/profiles/false-008590/client.crt: no such file or directory" logger="UnhandledError"
E0906 19:43:58.752871   13284 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6054/.minikube/profiles/false-008590/client.crt: no such file or directory" logger="UnhandledError"
E0906 19:44:00.034361   13284 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6054/.minikube/profiles/false-008590/client.crt: no such file or directory" logger="UnhandledError"
E0906 19:44:02.596647   13284 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6054/.minikube/profiles/false-008590/client.crt: no such file or directory" logger="UnhandledError"
E0906 19:44:05.055278   13284 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6054/.minikube/profiles/calico-008590/client.crt: no such file or directory" logger="UnhandledError"
E0906 19:44:07.718858   13284 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6054/.minikube/profiles/false-008590/client.crt: no such file or directory" logger="UnhandledError"
E0906 19:44:11.939878   13284 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6054/.minikube/profiles/auto-008590/client.crt: no such file or directory" logger="UnhandledError"
E0906 19:44:17.960737   13284 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6054/.minikube/profiles/false-008590/client.crt: no such file or directory" logger="UnhandledError"
E0906 19:44:18.579806   13284 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6054/.minikube/profiles/custom-flannel-008590/client.crt: no such file or directory" logger="UnhandledError"
E0906 19:44:22.692405   13284 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6054/.minikube/profiles/kindnet-008590/client.crt: no such file or directory" logger="UnhandledError"
E0906 19:44:27.077815   13284 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6054/.minikube/profiles/enable-default-cni-008590/client.crt: no such file or directory" logger="UnhandledError"
E0906 19:44:27.084267   13284 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6054/.minikube/profiles/enable-default-cni-008590/client.crt: no such file or directory" logger="UnhandledError"
E0906 19:44:27.095787   13284 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6054/.minikube/profiles/enable-default-cni-008590/client.crt: no such file or directory" logger="UnhandledError"
E0906 19:44:27.117325   13284 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6054/.minikube/profiles/enable-default-cni-008590/client.crt: no such file or directory" logger="UnhandledError"
E0906 19:44:27.159327   13284 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6054/.minikube/profiles/enable-default-cni-008590/client.crt: no such file or directory" logger="UnhandledError"
E0906 19:44:27.240839   13284 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6054/.minikube/profiles/enable-default-cni-008590/client.crt: no such file or directory" logger="UnhandledError"
E0906 19:44:27.402327   13284 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6054/.minikube/profiles/enable-default-cni-008590/client.crt: no such file or directory" logger="UnhandledError"
E0906 19:44:27.724163   13284 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6054/.minikube/profiles/enable-default-cni-008590/client.crt: no such file or directory" logger="UnhandledError"
E0906 19:44:28.365565   13284 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6054/.minikube/profiles/enable-default-cni-008590/client.crt: no such file or directory" logger="UnhandledError"
E0906 19:44:29.647142   13284 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6054/.minikube/profiles/enable-default-cni-008590/client.crt: no such file or directory" logger="UnhandledError"
E0906 19:44:32.209240   13284 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6054/.minikube/profiles/enable-default-cni-008590/client.crt: no such file or directory" logger="UnhandledError"
E0906 19:44:37.330822   13284 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6054/.minikube/profiles/enable-default-cni-008590/client.crt: no such file or directory" logger="UnhandledError"
E0906 19:44:38.442657   13284 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6054/.minikube/profiles/false-008590/client.crt: no such file or directory" logger="UnhandledError"
E0906 19:44:46.017623   13284 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6054/.minikube/profiles/calico-008590/client.crt: no such file or directory" logger="UnhandledError"
E0906 19:44:47.572099   13284 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6054/.minikube/profiles/enable-default-cni-008590/client.crt: no such file or directory" logger="UnhandledError"
E0906 19:44:59.541286   13284 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6054/.minikube/profiles/custom-flannel-008590/client.crt: no such file or directory" logger="UnhandledError"
E0906 19:45:08.053853   13284 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6054/.minikube/profiles/enable-default-cni-008590/client.crt: no such file or directory" logger="UnhandledError"
E0906 19:45:09.421326   13284 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6054/.minikube/profiles/flannel-008590/client.crt: no such file or directory" logger="UnhandledError"
E0906 19:45:09.427711   13284 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6054/.minikube/profiles/flannel-008590/client.crt: no such file or directory" logger="UnhandledError"
E0906 19:45:09.439067   13284 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6054/.minikube/profiles/flannel-008590/client.crt: no such file or directory" logger="UnhandledError"
E0906 19:45:09.460419   13284 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6054/.minikube/profiles/flannel-008590/client.crt: no such file or directory" logger="UnhandledError"
E0906 19:45:09.501766   13284 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6054/.minikube/profiles/flannel-008590/client.crt: no such file or directory" logger="UnhandledError"
E0906 19:45:09.583232   13284 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6054/.minikube/profiles/flannel-008590/client.crt: no such file or directory" logger="UnhandledError"
E0906 19:45:09.744773   13284 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6054/.minikube/profiles/flannel-008590/client.crt: no such file or directory" logger="UnhandledError"
E0906 19:45:10.066491   13284 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6054/.minikube/profiles/flannel-008590/client.crt: no such file or directory" logger="UnhandledError"
E0906 19:45:10.707874   13284 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6054/.minikube/profiles/flannel-008590/client.crt: no such file or directory" logger="UnhandledError"
E0906 19:45:11.990186   13284 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6054/.minikube/profiles/flannel-008590/client.crt: no such file or directory" logger="UnhandledError"
E0906 19:45:14.551912   13284 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6054/.minikube/profiles/flannel-008590/client.crt: no such file or directory" logger="UnhandledError"
E0906 19:45:19.404585   13284 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6054/.minikube/profiles/false-008590/client.crt: no such file or directory" logger="UnhandledError"
E0906 19:45:19.673398   13284 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6054/.minikube/profiles/flannel-008590/client.crt: no such file or directory" logger="UnhandledError"
E0906 19:45:29.915045   13284 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6054/.minikube/profiles/flannel-008590/client.crt: no such file or directory" logger="UnhandledError"
E0906 19:45:35.652792   13284 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6054/.minikube/profiles/bridge-008590/client.crt: no such file or directory" logger="UnhandledError"
E0906 19:45:35.659216   13284 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6054/.minikube/profiles/bridge-008590/client.crt: no such file or directory" logger="UnhandledError"
E0906 19:45:35.670607   13284 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6054/.minikube/profiles/bridge-008590/client.crt: no such file or directory" logger="UnhandledError"
E0906 19:45:35.692066   13284 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6054/.minikube/profiles/bridge-008590/client.crt: no such file or directory" logger="UnhandledError"
E0906 19:45:35.733551   13284 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6054/.minikube/profiles/bridge-008590/client.crt: no such file or directory" logger="UnhandledError"
E0906 19:45:35.815030   13284 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6054/.minikube/profiles/bridge-008590/client.crt: no such file or directory" logger="UnhandledError"
E0906 19:45:35.976704   13284 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6054/.minikube/profiles/bridge-008590/client.crt: no such file or directory" logger="UnhandledError"
E0906 19:45:36.298396   13284 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6054/.minikube/profiles/bridge-008590/client.crt: no such file or directory" logger="UnhandledError"
E0906 19:45:36.940661   13284 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6054/.minikube/profiles/bridge-008590/client.crt: no such file or directory" logger="UnhandledError"
E0906 19:45:38.222755   13284 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6054/.minikube/profiles/bridge-008590/client.crt: no such file or directory" logger="UnhandledError"
E0906 19:45:38.301285   13284 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6054/.minikube/profiles/gvisor-767932/client.crt: no such file or directory" logger="UnhandledError"
E0906 19:45:40.784762   13284 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6054/.minikube/profiles/bridge-008590/client.crt: no such file or directory" logger="UnhandledError"
E0906 19:45:45.906055   13284 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6054/.minikube/profiles/bridge-008590/client.crt: no such file or directory" logger="UnhandledError"
E0906 19:45:49.016128   13284 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6054/.minikube/profiles/enable-default-cni-008590/client.crt: no such file or directory" logger="UnhandledError"
E0906 19:45:50.397963   13284 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6054/.minikube/profiles/flannel-008590/client.crt: no such file or directory" logger="UnhandledError"
E0906 19:45:56.147760   13284 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6054/.minikube/profiles/bridge-008590/client.crt: no such file or directory" logger="UnhandledError"
E0906 19:46:02.575923   13284 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6054/.minikube/profiles/kubenet-008590/client.crt: no such file or directory" logger="UnhandledError"
E0906 19:46:02.582297   13284 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6054/.minikube/profiles/kubenet-008590/client.crt: no such file or directory" logger="UnhandledError"
E0906 19:46:02.593677   13284 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6054/.minikube/profiles/kubenet-008590/client.crt: no such file or directory" logger="UnhandledError"
E0906 19:46:02.615066   13284 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6054/.minikube/profiles/kubenet-008590/client.crt: no such file or directory" logger="UnhandledError"
E0906 19:46:02.656405   13284 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6054/.minikube/profiles/kubenet-008590/client.crt: no such file or directory" logger="UnhandledError"
E0906 19:46:02.737840   13284 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6054/.minikube/profiles/kubenet-008590/client.crt: no such file or directory" logger="UnhandledError"
E0906 19:46:02.899465   13284 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6054/.minikube/profiles/kubenet-008590/client.crt: no such file or directory" logger="UnhandledError"
E0906 19:46:03.221331   13284 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6054/.minikube/profiles/kubenet-008590/client.crt: no such file or directory" logger="UnhandledError"
E0906 19:46:03.862691   13284 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6054/.minikube/profiles/kubenet-008590/client.crt: no such file or directory" logger="UnhandledError"
E0906 19:46:05.144212   13284 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6054/.minikube/profiles/kubenet-008590/client.crt: no such file or directory" logger="UnhandledError"
E0906 19:46:07.706484   13284 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6054/.minikube/profiles/kubenet-008590/client.crt: no such file or directory" logger="UnhandledError"
E0906 19:46:07.939299   13284 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6054/.minikube/profiles/calico-008590/client.crt: no such file or directory" logger="UnhandledError"
E0906 19:46:12.828282   13284 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6054/.minikube/profiles/kubenet-008590/client.crt: no such file or directory" logger="UnhandledError"
E0906 19:46:16.629706   13284 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6054/.minikube/profiles/bridge-008590/client.crt: no such file or directory" logger="UnhandledError"
E0906 19:46:21.463452   13284 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6054/.minikube/profiles/custom-flannel-008590/client.crt: no such file or directory" logger="UnhandledError"
E0906 19:46:23.070549   13284 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6054/.minikube/profiles/kubenet-008590/client.crt: no such file or directory" logger="UnhandledError"
E0906 19:46:28.078166   13284 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6054/.minikube/profiles/auto-008590/client.crt: no such file or directory" logger="UnhandledError"
E0906 19:46:31.360231   13284 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6054/.minikube/profiles/flannel-008590/client.crt: no such file or directory" logger="UnhandledError"
E0906 19:46:38.831574   13284 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6054/.minikube/profiles/kindnet-008590/client.crt: no such file or directory" logger="UnhandledError"
E0906 19:46:41.326098   13284 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6054/.minikube/profiles/false-008590/client.crt: no such file or directory" logger="UnhandledError"
E0906 19:46:43.552814   13284 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6054/.minikube/profiles/kubenet-008590/client.crt: no such file or directory" logger="UnhandledError"
E0906 19:46:55.782020   13284 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6054/.minikube/profiles/auto-008590/client.crt: no such file or directory" logger="UnhandledError"
E0906 19:46:57.591820   13284 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6054/.minikube/profiles/bridge-008590/client.crt: no such file or directory" logger="UnhandledError"
E0906 19:47:06.534327   13284 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6054/.minikube/profiles/kindnet-008590/client.crt: no such file or directory" logger="UnhandledError"
E0906 19:47:10.937746   13284 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6054/.minikube/profiles/enable-default-cni-008590/client.crt: no such file or directory" logger="UnhandledError"
E0906 19:47:24.514308   13284 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6054/.minikube/profiles/kubenet-008590/client.crt: no such file or directory" logger="UnhandledError"
E0906 19:47:27.747229   13284 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6054/.minikube/profiles/functional-745007/client.crt: no such file or directory" logger="UnhandledError"
E0906 19:47:49.163820   13284 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6054/.minikube/profiles/skaffold-017275/client.crt: no such file or directory" logger="UnhandledError"
E0906 19:47:53.282255   13284 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6054/.minikube/profiles/flannel-008590/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-804058 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --kubernetes-version=v1.31.0: (5m39.361833694s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-804058 -n embed-certs-804058
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (339.64s)
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-wmglf" [c532ac26-54ff-4639-9d33-5f016fedb464] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004071749s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.08s)
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-wmglf" [c532ac26-54ff-4639-9d33-5f016fedb464] Running
E0906 19:48:06.741996   13284 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6054/.minikube/profiles/addons-009491/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004826836s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-785616 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.08s)
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.22s)
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-785616 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/gvisor-addon:2
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.22s)
TestStartStop/group/no-preload/serial/Pause (2.46s)
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-785616 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-785616 -n no-preload-785616
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-785616 -n no-preload-785616: exit status 2 (245.628965ms)
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-785616 -n no-preload-785616
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-785616 -n no-preload-785616: exit status 2 (254.20627ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p no-preload-785616 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-785616 -n no-preload-785616
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-785616 -n no-preload-785616
--- PASS: TestStartStop/group/no-preload/serial/Pause (2.46s)
TestStartStop/group/newest-cni/serial/FirstStart (60.76s)
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-901997 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --kubernetes-version=v1.31.0
E0906 19:48:19.513365   13284 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6054/.minikube/profiles/bridge-008590/client.crt: no such file or directory" logger="UnhandledError"
E0906 19:48:24.078027   13284 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6054/.minikube/profiles/calico-008590/client.crt: no such file or directory" logger="UnhandledError"
E0906 19:48:37.601832   13284 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6054/.minikube/profiles/custom-flannel-008590/client.crt: no such file or directory" logger="UnhandledError"
E0906 19:48:46.435702   13284 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6054/.minikube/profiles/kubenet-008590/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-901997 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --kubernetes-version=v1.31.0: (1m0.758852427s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (60.76s)
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (9.01s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-vqbvk" [ad0d0ea5-4153-42e0-913b-d81e5c52f58c] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:344: "kubernetes-dashboard-695b96c756-vqbvk" [ad0d0ea5-4153-42e0-913b-d81e5c52f58c] Running
E0906 19:48:51.780841   13284 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6054/.minikube/profiles/calico-008590/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 9.004950065s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (9.01s)
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.09s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-vqbvk" [ad0d0ea5-4153-42e0-913b-d81e5c52f58c] Running
E0906 19:48:57.465411   13284 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6054/.minikube/profiles/false-008590/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004743904s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-187530 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.09s)
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.21s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-187530 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/gvisor-addon:2
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.21s)
TestStartStop/group/default-k8s-diff-port/serial/Pause (2.51s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-187530 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-187530 -n default-k8s-diff-port-187530
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-187530 -n default-k8s-diff-port-187530: exit status 2 (240.70727ms)
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-187530 -n default-k8s-diff-port-187530
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-187530 -n default-k8s-diff-port-187530: exit status 2 (239.642865ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-diff-port-187530 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-187530 -n default-k8s-diff-port-187530
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-187530 -n default-k8s-diff-port-187530
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (2.51s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-ktvqv" [89a92edc-0b10-44db-938f-8583b9832461] Running
E0906 19:49:05.305428   13284 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6054/.minikube/profiles/custom-flannel-008590/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.005360283s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.07s)
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-ktvqv" [89a92edc-0b10-44db-938f-8583b9832461] Running
E0906 19:49:12.229244   13284 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6054/.minikube/profiles/skaffold-017275/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004425904s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-804058 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.07s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.88s)
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-901997 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.88s)

TestStartStop/group/newest-cni/serial/Stop (12.73s)
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-901997 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-901997 --alsologtostderr -v=3: (12.733467209s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (12.73s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.21s)
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-804058 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/gvisor-addon:2
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.21s)

TestStartStop/group/embed-certs/serial/Pause (2.42s)
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-804058 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-804058 -n embed-certs-804058
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-804058 -n embed-certs-804058: exit status 2 (239.960364ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-804058 -n embed-certs-804058
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-804058 -n embed-certs-804058: exit status 2 (231.094265ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p embed-certs-804058 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-804058 -n embed-certs-804058
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-804058 -n embed-certs-804058
--- PASS: TestStartStop/group/embed-certs/serial/Pause (2.42s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.17s)
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-901997 -n newest-cni-901997
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-901997 -n newest-cni-901997: exit status 7 (61.238321ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-901997 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.17s)

TestStartStop/group/newest-cni/serial/SecondStart (36.92s)
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-901997 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --kubernetes-version=v1.31.0
E0906 19:49:54.779819   13284 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6054/.minikube/profiles/enable-default-cni-008590/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-901997 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --kubernetes-version=v1.31.0: (36.679687642s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-901997 -n newest-cni-901997
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (36.92s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6s)
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-5m7k9" [fd5409ed-9cc4-4634-8a72-ad6bea3e33c9] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.00365899s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.00s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.2s)
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-901997 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/gvisor-addon:2
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.20s)

TestStartStop/group/newest-cni/serial/Pause (2.14s)
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-901997 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-901997 -n newest-cni-901997
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-901997 -n newest-cni-901997: exit status 2 (223.9204ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-901997 -n newest-cni-901997
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-901997 -n newest-cni-901997: exit status 2 (235.123324ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-901997 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-901997 -n newest-cni-901997
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-901997 -n newest-cni-901997
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.14s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.07s)
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-5m7k9" [fd5409ed-9cc4-4634-8a72-ad6bea3e33c9] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.005600662s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-799682 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.07s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.19s)
=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-799682 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/gvisor-addon:2
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.19s)

TestStartStop/group/old-k8s-version/serial/Pause (2.21s)
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-799682 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-799682 -n old-k8s-version-799682
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-799682 -n old-k8s-version-799682: exit status 2 (229.305742ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-799682 -n old-k8s-version-799682
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-799682 -n old-k8s-version-799682: exit status 2 (227.771384ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p old-k8s-version-799682 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-799682 -n old-k8s-version-799682
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-799682 -n old-k8s-version-799682
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (2.21s)

Test skip (31/341)

TestDownloadOnly/v1.20.0/cached-images (0s)
=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

TestDownloadOnly/v1.20.0/binaries (0s)
=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

TestDownloadOnly/v1.20.0/kubectl (0s)
=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

TestDownloadOnly/v1.31.0/cached-images (0s)
=== RUN   TestDownloadOnly/v1.31.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.31.0/cached-images (0.00s)

TestDownloadOnly/v1.31.0/binaries (0s)
=== RUN   TestDownloadOnly/v1.31.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.31.0/binaries (0.00s)

TestDownloadOnly/v1.31.0/kubectl (0s)
=== RUN   TestDownloadOnly/v1.31.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.31.0/kubectl (0.00s)

TestDownloadOnlyKic (0s)
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

TestAddons/parallel/Olm (0s)
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm
=== CONT  TestAddons/parallel/Olm
addons_test.go:500: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerEnvContainerd (0s)
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/PodmanEnv (0s)
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:550: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

TestImageBuild/serial/validateImageBuildWithBuildEnv (0s)
=== RUN   TestImageBuild/serial/validateImageBuildWithBuildEnv
image_test.go:114: skipping due to https://github.com/kubernetes/minikube/issues/12431
--- SKIP: TestImageBuild/serial/validateImageBuildWithBuildEnv (0.00s)

TestKicCustomNetwork (0s)
=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

TestKicExistingNetwork (0s)
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

TestKicCustomSubnet (0s)
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

TestKicStaticIP (0s)
=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

TestChangeNoneUser (0s)
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestInsufficientStorage (0s)
=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

TestMissingContainerUpgrade (0s)
=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

TestNetworkPlugins/group/cilium (3.12s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:626: 
----------------------- debugLogs start: cilium-008590 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-008590

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-008590

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-008590

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-008590

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-008590

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-008590

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-008590

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-008590

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-008590

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-008590

>>> host: /etc/nsswitch.conf:
* Profile "cilium-008590" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-008590"

>>> host: /etc/hosts:
* Profile "cilium-008590" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-008590"

>>> host: /etc/resolv.conf:
* Profile "cilium-008590" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-008590"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-008590

>>> host: crictl pods:
* Profile "cilium-008590" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-008590"

>>> host: crictl containers:
* Profile "cilium-008590" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-008590"

>>> k8s: describe netcat deployment:
error: context "cilium-008590" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-008590" does not exist

>>> k8s: netcat logs:
error: context "cilium-008590" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-008590" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-008590" does not exist

>>> k8s: coredns logs:
error: context "cilium-008590" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-008590" does not exist

>>> k8s: api server logs:
error: context "cilium-008590" does not exist

>>> host: /etc/cni:
* Profile "cilium-008590" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-008590"

>>> host: ip a s:
* Profile "cilium-008590" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-008590"

>>> host: ip r s:
* Profile "cilium-008590" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-008590"

>>> host: iptables-save:
* Profile "cilium-008590" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-008590"

>>> host: iptables table nat:
* Profile "cilium-008590" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-008590"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-008590

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-008590

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-008590" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-008590" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-008590

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-008590

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-008590" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-008590" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-008590" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-008590" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-008590" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-008590" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-008590"

>>> host: kubelet daemon config:
* Profile "cilium-008590" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-008590"

>>> k8s: kubelet logs:
* Profile "cilium-008590" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-008590"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-008590" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-008590"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-008590" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-008590"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-008590

>>> host: docker daemon status:
* Profile "cilium-008590" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-008590"

>>> host: docker daemon config:
* Profile "cilium-008590" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-008590"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-008590" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-008590"

>>> host: docker system info:
* Profile "cilium-008590" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-008590"

>>> host: cri-docker daemon status:
* Profile "cilium-008590" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-008590"

>>> host: cri-docker daemon config:
* Profile "cilium-008590" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-008590"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-008590" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-008590"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-008590" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-008590"

>>> host: cri-dockerd version:
* Profile "cilium-008590" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-008590"

>>> host: containerd daemon status:
* Profile "cilium-008590" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-008590"

>>> host: containerd daemon config:
* Profile "cilium-008590" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-008590"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-008590" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-008590"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-008590" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-008590"

>>> host: containerd config dump:
* Profile "cilium-008590" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-008590"

>>> host: crio daemon status:
* Profile "cilium-008590" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-008590"

>>> host: crio daemon config:
* Profile "cilium-008590" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-008590"

>>> host: /etc/crio:
* Profile "cilium-008590" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-008590"

>>> host: crio config:
* Profile "cilium-008590" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-008590"

----------------------- debugLogs end: cilium-008590 [took: 2.976371279s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-008590" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-008590
--- SKIP: TestNetworkPlugins/group/cilium (3.12s)

TestStartStop/group/disable-driver-mounts (0.15s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-933295" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-933295
--- SKIP: TestStartStop/group/disable-driver-mounts (0.15s)
