Test Report: KVM_Linux 19678

8ef5536409705b0cbf1ed8a719bbf7f792426b16:2024-09-20:36299

Failed tests (1/340)

Order | Failed test                  | Duration
33    | TestAddons/parallel/Registry | 74.45s

TestAddons/parallel/Registry (74.45s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:328: registry stabilized in 4.028786ms
addons_test.go:330: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-66c9cd494c-fbp5k" [ff9004d3-6b62-4519-a372-a121accd4feb] Running
addons_test.go:330: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.003487803s
addons_test.go:333: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-cpcfm" [8b08dbe7-93bf-426c-bf4d-ed3ea87b2d2f] Running
addons_test.go:333: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.004326471s
addons_test.go:338: (dbg) Run:  kubectl --context addons-545460 delete po -l run=registry-test --now
addons_test.go:343: (dbg) Run:  kubectl --context addons-545460 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:343: (dbg) Non-zero exit: kubectl --context addons-545460 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": exit status 1 (1m0.083574796s)

-- stdout --
	pod "registry-test" deleted

-- /stdout --
** stderr ** 
	error: timed out waiting for the condition

** /stderr **
addons_test.go:345: failed to hit registry.kube-system.svc.cluster.local. args "kubectl --context addons-545460 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c \"wget --spider -S http://registry.kube-system.svc.cluster.local\"" failed: exit status 1
addons_test.go:349: expected curl response be "HTTP/1.1 200", but got *pod "registry-test" deleted
*
addons_test.go:357: (dbg) Run:  out/minikube-linux-amd64 -p addons-545460 ip
addons_test.go:386: (dbg) Run:  out/minikube-linux-amd64 -p addons-545460 addons disable registry --alsologtostderr -v=1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-545460 -n addons-545460
helpers_test.go:244: <<< TestAddons/parallel/Registry FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Registry]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-545460 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-545460 logs -n 25: (1.565670833s)
helpers_test.go:252: TestAddons/parallel/Registry logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| delete  | -p download-only-182067                                                                     | download-only-182067 | jenkins | v1.34.0 | 20 Sep 24 17:48 UTC | 20 Sep 24 17:48 UTC |
	| delete  | -p download-only-462178                                                                     | download-only-462178 | jenkins | v1.34.0 | 20 Sep 24 17:48 UTC | 20 Sep 24 17:48 UTC |
	| delete  | -p download-only-182067                                                                     | download-only-182067 | jenkins | v1.34.0 | 20 Sep 24 17:48 UTC | 20 Sep 24 17:48 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-504558 | jenkins | v1.34.0 | 20 Sep 24 17:48 UTC |                     |
	|         | binary-mirror-504558                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                      |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                      |         |         |                     |                     |
	|         | http://127.0.0.1:39541                                                                      |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-504558                                                                     | binary-mirror-504558 | jenkins | v1.34.0 | 20 Sep 24 17:48 UTC | 20 Sep 24 17:48 UTC |
	| addons  | enable dashboard -p                                                                         | addons-545460        | jenkins | v1.34.0 | 20 Sep 24 17:48 UTC |                     |
	|         | addons-545460                                                                               |                      |         |         |                     |                     |
	| addons  | disable dashboard -p                                                                        | addons-545460        | jenkins | v1.34.0 | 20 Sep 24 17:48 UTC |                     |
	|         | addons-545460                                                                               |                      |         |         |                     |                     |
	| start   | -p addons-545460 --wait=true                                                                | addons-545460        | jenkins | v1.34.0 | 20 Sep 24 17:48 UTC | 20 Sep 24 17:51 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                      |         |         |                     |                     |
	|         | --addons=registry                                                                           |                      |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                      |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                      |         |         |                     |                     |
	|         | --driver=kvm2  --addons=ingress                                                             |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                      |         |         |                     |                     |
	| addons  | addons-545460 addons disable                                                                | addons-545460        | jenkins | v1.34.0 | 20 Sep 24 17:52 UTC | 20 Sep 24 17:52 UTC |
	|         | volcano --alsologtostderr -v=1                                                              |                      |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-545460        | jenkins | v1.34.0 | 20 Sep 24 18:00 UTC | 20 Sep 24 18:00 UTC |
	|         | -p addons-545460                                                                            |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-545460        | jenkins | v1.34.0 | 20 Sep 24 18:00 UTC | 20 Sep 24 18:00 UTC |
	|         | -p addons-545460                                                                            |                      |         |         |                     |                     |
	| ssh     | addons-545460 ssh cat                                                                       | addons-545460        | jenkins | v1.34.0 | 20 Sep 24 18:00 UTC | 20 Sep 24 18:00 UTC |
	|         | /opt/local-path-provisioner/pvc-e568ad18-8c41-4cde-a7ed-cb91652aed7c_default_test-pvc/file1 |                      |         |         |                     |                     |
	| addons  | addons-545460 addons disable                                                                | addons-545460        | jenkins | v1.34.0 | 20 Sep 24 18:00 UTC | 20 Sep 24 18:00 UTC |
	|         | storage-provisioner-rancher                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-545460        | jenkins | v1.34.0 | 20 Sep 24 18:00 UTC | 20 Sep 24 18:00 UTC |
	|         | addons-545460                                                                               |                      |         |         |                     |                     |
	| addons  | addons-545460 addons disable                                                                | addons-545460        | jenkins | v1.34.0 | 20 Sep 24 18:00 UTC | 20 Sep 24 18:00 UTC |
	|         | headlamp --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-545460 addons                                                                        | addons-545460        | jenkins | v1.34.0 | 20 Sep 24 18:00 UTC | 20 Sep 24 18:00 UTC |
	|         | disable metrics-server                                                                      |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-545460 addons disable                                                                | addons-545460        | jenkins | v1.34.0 | 20 Sep 24 18:00 UTC | 20 Sep 24 18:01 UTC |
	|         | yakd --alsologtostderr -v=1                                                                 |                      |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-545460        | jenkins | v1.34.0 | 20 Sep 24 18:01 UTC | 20 Sep 24 18:01 UTC |
	|         | addons-545460                                                                               |                      |         |         |                     |                     |
	| ssh     | addons-545460 ssh curl -s                                                                   | addons-545460        | jenkins | v1.34.0 | 20 Sep 24 18:01 UTC | 20 Sep 24 18:01 UTC |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                      |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                      |         |         |                     |                     |
	| ip      | addons-545460 ip                                                                            | addons-545460        | jenkins | v1.34.0 | 20 Sep 24 18:01 UTC | 20 Sep 24 18:01 UTC |
	| addons  | addons-545460 addons disable                                                                | addons-545460        | jenkins | v1.34.0 | 20 Sep 24 18:01 UTC | 20 Sep 24 18:01 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-545460 addons disable                                                                | addons-545460        | jenkins | v1.34.0 | 20 Sep 24 18:01 UTC | 20 Sep 24 18:01 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                      |         |         |                     |                     |
	| ip      | addons-545460 ip                                                                            | addons-545460        | jenkins | v1.34.0 | 20 Sep 24 18:01 UTC | 20 Sep 24 18:01 UTC |
	| addons  | addons-545460 addons disable                                                                | addons-545460        | jenkins | v1.34.0 | 20 Sep 24 18:01 UTC | 20 Sep 24 18:01 UTC |
	|         | registry --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-545460 addons                                                                        | addons-545460        | jenkins | v1.34.0 | 20 Sep 24 18:01 UTC |                     |
	|         | disable csi-hostpath-driver                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/20 17:48:07
	Running on machine: ubuntu-20-agent-12
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0920 17:48:07.825156   83954 out.go:345] Setting OutFile to fd 1 ...
	I0920 17:48:07.825388   83954 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 17:48:07.825415   83954 out.go:358] Setting ErrFile to fd 2...
	I0920 17:48:07.825421   83954 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 17:48:07.825623   83954 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19678-76160/.minikube/bin
	I0920 17:48:07.826253   83954 out.go:352] Setting JSON to false
	I0920 17:48:07.827124   83954 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-12","uptime":5439,"bootTime":1726849049,"procs":177,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0920 17:48:07.827223   83954 start.go:139] virtualization: kvm guest
	I0920 17:48:07.829213   83954 out.go:177] * [addons-545460] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0920 17:48:07.830726   83954 out.go:177]   - MINIKUBE_LOCATION=19678
	I0920 17:48:07.830725   83954 notify.go:220] Checking for updates...
	I0920 17:48:07.833190   83954 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0920 17:48:07.834476   83954 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19678-76160/kubeconfig
	I0920 17:48:07.835685   83954 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19678-76160/.minikube
	I0920 17:48:07.836895   83954 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0920 17:48:07.838165   83954 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0920 17:48:07.839424   83954 driver.go:394] Setting default libvirt URI to qemu:///system
	I0920 17:48:07.870627   83954 out.go:177] * Using the kvm2 driver based on user configuration
	I0920 17:48:07.871751   83954 start.go:297] selected driver: kvm2
	I0920 17:48:07.871765   83954 start.go:901] validating driver "kvm2" against <nil>
	I0920 17:48:07.871778   83954 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0920 17:48:07.872807   83954 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 17:48:07.872894   83954 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19678-76160/.minikube/bin:/home/jenkins/workspace/KVM_Linux_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0920 17:48:07.887691   83954 install.go:137] /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0920 17:48:07.887735   83954 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0920 17:48:07.887985   83954 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0920 17:48:07.888018   83954 cni.go:84] Creating CNI manager for ""
	I0920 17:48:07.888081   83954 cni.go:158] "kvm2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0920 17:48:07.888092   83954 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0920 17:48:07.888162   83954 start.go:340] cluster config:
	{Name:addons-545460 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-545460 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:d
ocker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: S
SHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 17:48:07.888269   83954 iso.go:125] acquiring lock: {Name:mk2228d1b417575d45b5c1ebe8ab98349c7e233e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 17:48:07.890324   83954 out.go:177] * Starting "addons-545460" primary control-plane node in "addons-545460" cluster
	I0920 17:48:07.891539   83954 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0920 17:48:07.891583   83954 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19678-76160/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4
	I0920 17:48:07.891600   83954 cache.go:56] Caching tarball of preloaded images
	I0920 17:48:07.891700   83954 preload.go:172] Found /home/jenkins/minikube-integration/19678-76160/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0920 17:48:07.891712   83954 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0920 17:48:07.892063   83954 profile.go:143] Saving config to /home/jenkins/minikube-integration/19678-76160/.minikube/profiles/addons-545460/config.json ...
	I0920 17:48:07.892091   83954 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19678-76160/.minikube/profiles/addons-545460/config.json: {Name:mk4446f030e80bf1f34a5f2b8a835b16225a9020 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 17:48:07.892274   83954 start.go:360] acquireMachinesLock for addons-545460: {Name:mk4647a7dcd767168fc8fb46a2e772339e4a178f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0920 17:48:07.892339   83954 start.go:364] duration metric: took 48.132µs to acquireMachinesLock for "addons-545460"
	I0920 17:48:07.892360   83954 start.go:93] Provisioning new machine with config: &{Name:addons-545460 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.31.1 ClusterName:addons-545460 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort
:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0920 17:48:07.892431   83954 start.go:125] createHost starting for "" (driver="kvm2")
	I0920 17:48:07.894017   83954 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0920 17:48:07.894146   83954 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0920 17:48:07.894195   83954 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 17:48:07.907866   83954 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34551
	I0920 17:48:07.908329   83954 main.go:141] libmachine: () Calling .GetVersion
	I0920 17:48:07.908925   83954 main.go:141] libmachine: Using API Version  1
	I0920 17:48:07.908944   83954 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 17:48:07.909275   83954 main.go:141] libmachine: () Calling .GetMachineName
	I0920 17:48:07.909484   83954 main.go:141] libmachine: (addons-545460) Calling .GetMachineName
	I0920 17:48:07.909675   83954 main.go:141] libmachine: (addons-545460) Calling .DriverName
	I0920 17:48:07.909837   83954 start.go:159] libmachine.API.Create for "addons-545460" (driver="kvm2")
	I0920 17:48:07.909865   83954 client.go:168] LocalClient.Create starting
	I0920 17:48:07.909907   83954 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19678-76160/.minikube/certs/ca.pem
	I0920 17:48:08.024214   83954 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19678-76160/.minikube/certs/cert.pem
	I0920 17:48:08.173163   83954 main.go:141] libmachine: Running pre-create checks...
	I0920 17:48:08.173192   83954 main.go:141] libmachine: (addons-545460) Calling .PreCreateCheck
	I0920 17:48:08.173710   83954 main.go:141] libmachine: (addons-545460) Calling .GetConfigRaw
	I0920 17:48:08.174170   83954 main.go:141] libmachine: Creating machine...
	I0920 17:48:08.174188   83954 main.go:141] libmachine: (addons-545460) Calling .Create
	I0920 17:48:08.174392   83954 main.go:141] libmachine: (addons-545460) Creating KVM machine...
	I0920 17:48:08.175748   83954 main.go:141] libmachine: (addons-545460) DBG | found existing default KVM network
	I0920 17:48:08.176398   83954 main.go:141] libmachine: (addons-545460) DBG | I0920 17:48:08.176230   83977 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0001231f0}
	I0920 17:48:08.176426   83954 main.go:141] libmachine: (addons-545460) DBG | created network xml: 
	I0920 17:48:08.176500   83954 main.go:141] libmachine: (addons-545460) DBG | <network>
	I0920 17:48:08.176546   83954 main.go:141] libmachine: (addons-545460) DBG |   <name>mk-addons-545460</name>
	I0920 17:48:08.176557   83954 main.go:141] libmachine: (addons-545460) DBG |   <dns enable='no'/>
	I0920 17:48:08.176576   83954 main.go:141] libmachine: (addons-545460) DBG |   
	I0920 17:48:08.176589   83954 main.go:141] libmachine: (addons-545460) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0920 17:48:08.176599   83954 main.go:141] libmachine: (addons-545460) DBG |     <dhcp>
	I0920 17:48:08.176608   83954 main.go:141] libmachine: (addons-545460) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0920 17:48:08.176617   83954 main.go:141] libmachine: (addons-545460) DBG |     </dhcp>
	I0920 17:48:08.176624   83954 main.go:141] libmachine: (addons-545460) DBG |   </ip>
	I0920 17:48:08.176633   83954 main.go:141] libmachine: (addons-545460) DBG |   
	I0920 17:48:08.176641   83954 main.go:141] libmachine: (addons-545460) DBG | </network>
	I0920 17:48:08.176651   83954 main.go:141] libmachine: (addons-545460) DBG | 
	I0920 17:48:08.181863   83954 main.go:141] libmachine: (addons-545460) DBG | trying to create private KVM network mk-addons-545460 192.168.39.0/24...
	I0920 17:48:08.243032   83954 main.go:141] libmachine: (addons-545460) DBG | private KVM network mk-addons-545460 192.168.39.0/24 created
	I0920 17:48:08.243068   83954 main.go:141] libmachine: (addons-545460) Setting up store path in /home/jenkins/minikube-integration/19678-76160/.minikube/machines/addons-545460 ...
	I0920 17:48:08.243080   83954 main.go:141] libmachine: (addons-545460) DBG | I0920 17:48:08.243001   83977 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19678-76160/.minikube
	I0920 17:48:08.243124   83954 main.go:141] libmachine: (addons-545460) Building disk image from file:///home/jenkins/minikube-integration/19678-76160/.minikube/cache/iso/amd64/minikube-v1.34.0-1726481713-19649-amd64.iso
	I0920 17:48:08.243160   83954 main.go:141] libmachine: (addons-545460) Downloading /home/jenkins/minikube-integration/19678-76160/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19678-76160/.minikube/cache/iso/amd64/minikube-v1.34.0-1726481713-19649-amd64.iso...
	I0920 17:48:08.491673   83954 main.go:141] libmachine: (addons-545460) DBG | I0920 17:48:08.491546   83977 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19678-76160/.minikube/machines/addons-545460/id_rsa...
	I0920 17:48:08.652751   83954 main.go:141] libmachine: (addons-545460) DBG | I0920 17:48:08.652620   83977 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19678-76160/.minikube/machines/addons-545460/addons-545460.rawdisk...
	I0920 17:48:08.652777   83954 main.go:141] libmachine: (addons-545460) DBG | Writing magic tar header
	I0920 17:48:08.652788   83954 main.go:141] libmachine: (addons-545460) DBG | Writing SSH key tar header
	I0920 17:48:08.652795   83954 main.go:141] libmachine: (addons-545460) DBG | I0920 17:48:08.652732   83977 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19678-76160/.minikube/machines/addons-545460 ...
	I0920 17:48:08.652901   83954 main.go:141] libmachine: (addons-545460) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19678-76160/.minikube/machines/addons-545460
	I0920 17:48:08.652934   83954 main.go:141] libmachine: (addons-545460) Setting executable bit set on /home/jenkins/minikube-integration/19678-76160/.minikube/machines/addons-545460 (perms=drwx------)
	I0920 17:48:08.652947   83954 main.go:141] libmachine: (addons-545460) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19678-76160/.minikube/machines
	I0920 17:48:08.652960   83954 main.go:141] libmachine: (addons-545460) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19678-76160/.minikube
	I0920 17:48:08.652967   83954 main.go:141] libmachine: (addons-545460) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19678-76160
	I0920 17:48:08.652974   83954 main.go:141] libmachine: (addons-545460) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0920 17:48:08.652982   83954 main.go:141] libmachine: (addons-545460) DBG | Checking permissions on dir: /home/jenkins
	I0920 17:48:08.652989   83954 main.go:141] libmachine: (addons-545460) DBG | Checking permissions on dir: /home
	I0920 17:48:08.652994   83954 main.go:141] libmachine: (addons-545460) DBG | Skipping /home - not owner
	I0920 17:48:08.653012   83954 main.go:141] libmachine: (addons-545460) Setting executable bit set on /home/jenkins/minikube-integration/19678-76160/.minikube/machines (perms=drwxr-xr-x)
	I0920 17:48:08.653027   83954 main.go:141] libmachine: (addons-545460) Setting executable bit set on /home/jenkins/minikube-integration/19678-76160/.minikube (perms=drwxr-xr-x)
	I0920 17:48:08.653034   83954 main.go:141] libmachine: (addons-545460) Setting executable bit set on /home/jenkins/minikube-integration/19678-76160 (perms=drwxrwxr-x)
	I0920 17:48:08.653040   83954 main.go:141] libmachine: (addons-545460) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0920 17:48:08.653049   83954 main.go:141] libmachine: (addons-545460) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0920 17:48:08.653056   83954 main.go:141] libmachine: (addons-545460) Creating domain...
	I0920 17:48:08.654122   83954 main.go:141] libmachine: (addons-545460) define libvirt domain using xml: 
	I0920 17:48:08.654132   83954 main.go:141] libmachine: (addons-545460) <domain type='kvm'>
	I0920 17:48:08.654167   83954 main.go:141] libmachine: (addons-545460)   <name>addons-545460</name>
	I0920 17:48:08.654192   83954 main.go:141] libmachine: (addons-545460)   <memory unit='MiB'>4000</memory>
	I0920 17:48:08.654200   83954 main.go:141] libmachine: (addons-545460)   <vcpu>2</vcpu>
	I0920 17:48:08.654205   83954 main.go:141] libmachine: (addons-545460)   <features>
	I0920 17:48:08.654213   83954 main.go:141] libmachine: (addons-545460)     <acpi/>
	I0920 17:48:08.654217   83954 main.go:141] libmachine: (addons-545460)     <apic/>
	I0920 17:48:08.654221   83954 main.go:141] libmachine: (addons-545460)     <pae/>
	I0920 17:48:08.654225   83954 main.go:141] libmachine: (addons-545460)     
	I0920 17:48:08.654231   83954 main.go:141] libmachine: (addons-545460)   </features>
	I0920 17:48:08.654237   83954 main.go:141] libmachine: (addons-545460)   <cpu mode='host-passthrough'>
	I0920 17:48:08.654242   83954 main.go:141] libmachine: (addons-545460)   
	I0920 17:48:08.654260   83954 main.go:141] libmachine: (addons-545460)   </cpu>
	I0920 17:48:08.654288   83954 main.go:141] libmachine: (addons-545460)   <os>
	I0920 17:48:08.654306   83954 main.go:141] libmachine: (addons-545460)     <type>hvm</type>
	I0920 17:48:08.654314   83954 main.go:141] libmachine: (addons-545460)     <boot dev='cdrom'/>
	I0920 17:48:08.654320   83954 main.go:141] libmachine: (addons-545460)     <boot dev='hd'/>
	I0920 17:48:08.654325   83954 main.go:141] libmachine: (addons-545460)     <bootmenu enable='no'/>
	I0920 17:48:08.654331   83954 main.go:141] libmachine: (addons-545460)   </os>
	I0920 17:48:08.654336   83954 main.go:141] libmachine: (addons-545460)   <devices>
	I0920 17:48:08.654343   83954 main.go:141] libmachine: (addons-545460)     <disk type='file' device='cdrom'>
	I0920 17:48:08.654351   83954 main.go:141] libmachine: (addons-545460)       <source file='/home/jenkins/minikube-integration/19678-76160/.minikube/machines/addons-545460/boot2docker.iso'/>
	I0920 17:48:08.654358   83954 main.go:141] libmachine: (addons-545460)       <target dev='hdc' bus='scsi'/>
	I0920 17:48:08.654366   83954 main.go:141] libmachine: (addons-545460)       <readonly/>
	I0920 17:48:08.654373   83954 main.go:141] libmachine: (addons-545460)     </disk>
	I0920 17:48:08.654379   83954 main.go:141] libmachine: (addons-545460)     <disk type='file' device='disk'>
	I0920 17:48:08.654390   83954 main.go:141] libmachine: (addons-545460)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0920 17:48:08.654427   83954 main.go:141] libmachine: (addons-545460)       <source file='/home/jenkins/minikube-integration/19678-76160/.minikube/machines/addons-545460/addons-545460.rawdisk'/>
	I0920 17:48:08.654451   83954 main.go:141] libmachine: (addons-545460)       <target dev='hda' bus='virtio'/>
	I0920 17:48:08.654462   83954 main.go:141] libmachine: (addons-545460)     </disk>
	I0920 17:48:08.654473   83954 main.go:141] libmachine: (addons-545460)     <interface type='network'>
	I0920 17:48:08.654487   83954 main.go:141] libmachine: (addons-545460)       <source network='mk-addons-545460'/>
	I0920 17:48:08.654498   83954 main.go:141] libmachine: (addons-545460)       <model type='virtio'/>
	I0920 17:48:08.654508   83954 main.go:141] libmachine: (addons-545460)     </interface>
	I0920 17:48:08.654518   83954 main.go:141] libmachine: (addons-545460)     <interface type='network'>
	I0920 17:48:08.654529   83954 main.go:141] libmachine: (addons-545460)       <source network='default'/>
	I0920 17:48:08.654538   83954 main.go:141] libmachine: (addons-545460)       <model type='virtio'/>
	I0920 17:48:08.654544   83954 main.go:141] libmachine: (addons-545460)     </interface>
	I0920 17:48:08.654553   83954 main.go:141] libmachine: (addons-545460)     <serial type='pty'>
	I0920 17:48:08.654560   83954 main.go:141] libmachine: (addons-545460)       <target port='0'/>
	I0920 17:48:08.654572   83954 main.go:141] libmachine: (addons-545460)     </serial>
	I0920 17:48:08.654582   83954 main.go:141] libmachine: (addons-545460)     <console type='pty'>
	I0920 17:48:08.654595   83954 main.go:141] libmachine: (addons-545460)       <target type='serial' port='0'/>
	I0920 17:48:08.654609   83954 main.go:141] libmachine: (addons-545460)     </console>
	I0920 17:48:08.654622   83954 main.go:141] libmachine: (addons-545460)     <rng model='virtio'>
	I0920 17:48:08.654635   83954 main.go:141] libmachine: (addons-545460)       <backend model='random'>/dev/random</backend>
	I0920 17:48:08.654647   83954 main.go:141] libmachine: (addons-545460)     </rng>
	I0920 17:48:08.654656   83954 main.go:141] libmachine: (addons-545460)     
	I0920 17:48:08.654664   83954 main.go:141] libmachine: (addons-545460)     
	I0920 17:48:08.654674   83954 main.go:141] libmachine: (addons-545460)   </devices>
	I0920 17:48:08.654684   83954 main.go:141] libmachine: (addons-545460) </domain>
	I0920 17:48:08.654698   83954 main.go:141] libmachine: (addons-545460) 
	I0920 17:48:08.659098   83954 main.go:141] libmachine: (addons-545460) DBG | domain addons-545460 has defined MAC address 52:54:00:7e:5c:5c in network default
	I0920 17:48:08.659667   83954 main.go:141] libmachine: (addons-545460) DBG | domain addons-545460 has defined MAC address 52:54:00:9d:f7:f4 in network mk-addons-545460
	I0920 17:48:08.659681   83954 main.go:141] libmachine: (addons-545460) Ensuring networks are active...
	I0920 17:48:08.660326   83954 main.go:141] libmachine: (addons-545460) Ensuring network default is active
	I0920 17:48:08.660644   83954 main.go:141] libmachine: (addons-545460) Ensuring network mk-addons-545460 is active
	I0920 17:48:08.661089   83954 main.go:141] libmachine: (addons-545460) Getting domain xml...
	I0920 17:48:08.661827   83954 main.go:141] libmachine: (addons-545460) Creating domain...
	I0920 17:48:09.836016   83954 main.go:141] libmachine: (addons-545460) Waiting to get IP...
	I0920 17:48:09.836949   83954 main.go:141] libmachine: (addons-545460) DBG | domain addons-545460 has defined MAC address 52:54:00:9d:f7:f4 in network mk-addons-545460
	I0920 17:48:09.837322   83954 main.go:141] libmachine: (addons-545460) DBG | unable to find current IP address of domain addons-545460 in network mk-addons-545460
	I0920 17:48:09.837362   83954 main.go:141] libmachine: (addons-545460) DBG | I0920 17:48:09.837308   83977 retry.go:31] will retry after 230.481748ms: waiting for machine to come up
	I0920 17:48:10.069813   83954 main.go:141] libmachine: (addons-545460) DBG | domain addons-545460 has defined MAC address 52:54:00:9d:f7:f4 in network mk-addons-545460
	I0920 17:48:10.070211   83954 main.go:141] libmachine: (addons-545460) DBG | unable to find current IP address of domain addons-545460 in network mk-addons-545460
	I0920 17:48:10.070237   83954 main.go:141] libmachine: (addons-545460) DBG | I0920 17:48:10.070176   83977 retry.go:31] will retry after 300.144943ms: waiting for machine to come up
	I0920 17:48:10.371673   83954 main.go:141] libmachine: (addons-545460) DBG | domain addons-545460 has defined MAC address 52:54:00:9d:f7:f4 in network mk-addons-545460
	I0920 17:48:10.372109   83954 main.go:141] libmachine: (addons-545460) DBG | unable to find current IP address of domain addons-545460 in network mk-addons-545460
	I0920 17:48:10.372133   83954 main.go:141] libmachine: (addons-545460) DBG | I0920 17:48:10.372064   83977 retry.go:31] will retry after 441.985842ms: waiting for machine to come up
	I0920 17:48:10.815863   83954 main.go:141] libmachine: (addons-545460) DBG | domain addons-545460 has defined MAC address 52:54:00:9d:f7:f4 in network mk-addons-545460
	I0920 17:48:10.816301   83954 main.go:141] libmachine: (addons-545460) DBG | unable to find current IP address of domain addons-545460 in network mk-addons-545460
	I0920 17:48:10.816328   83954 main.go:141] libmachine: (addons-545460) DBG | I0920 17:48:10.816247   83977 retry.go:31] will retry after 472.767253ms: waiting for machine to come up
	I0920 17:48:11.290890   83954 main.go:141] libmachine: (addons-545460) DBG | domain addons-545460 has defined MAC address 52:54:00:9d:f7:f4 in network mk-addons-545460
	I0920 17:48:11.291463   83954 main.go:141] libmachine: (addons-545460) DBG | unable to find current IP address of domain addons-545460 in network mk-addons-545460
	I0920 17:48:11.291486   83954 main.go:141] libmachine: (addons-545460) DBG | I0920 17:48:11.291413   83977 retry.go:31] will retry after 505.600391ms: waiting for machine to come up
	I0920 17:48:11.798341   83954 main.go:141] libmachine: (addons-545460) DBG | domain addons-545460 has defined MAC address 52:54:00:9d:f7:f4 in network mk-addons-545460
	I0920 17:48:11.798721   83954 main.go:141] libmachine: (addons-545460) DBG | unable to find current IP address of domain addons-545460 in network mk-addons-545460
	I0920 17:48:11.798742   83954 main.go:141] libmachine: (addons-545460) DBG | I0920 17:48:11.798687   83977 retry.go:31] will retry after 880.645551ms: waiting for machine to come up
	I0920 17:48:12.680632   83954 main.go:141] libmachine: (addons-545460) DBG | domain addons-545460 has defined MAC address 52:54:00:9d:f7:f4 in network mk-addons-545460
	I0920 17:48:12.681141   83954 main.go:141] libmachine: (addons-545460) DBG | unable to find current IP address of domain addons-545460 in network mk-addons-545460
	I0920 17:48:12.681167   83954 main.go:141] libmachine: (addons-545460) DBG | I0920 17:48:12.681083   83977 retry.go:31] will retry after 958.125131ms: waiting for machine to come up
	I0920 17:48:13.640449   83954 main.go:141] libmachine: (addons-545460) DBG | domain addons-545460 has defined MAC address 52:54:00:9d:f7:f4 in network mk-addons-545460
	I0920 17:48:13.640921   83954 main.go:141] libmachine: (addons-545460) DBG | unable to find current IP address of domain addons-545460 in network mk-addons-545460
	I0920 17:48:13.640946   83954 main.go:141] libmachine: (addons-545460) DBG | I0920 17:48:13.640877   83977 retry.go:31] will retry after 1.147144829s: waiting for machine to come up
	I0920 17:48:14.790142   83954 main.go:141] libmachine: (addons-545460) DBG | domain addons-545460 has defined MAC address 52:54:00:9d:f7:f4 in network mk-addons-545460
	I0920 17:48:14.790643   83954 main.go:141] libmachine: (addons-545460) DBG | unable to find current IP address of domain addons-545460 in network mk-addons-545460
	I0920 17:48:14.790673   83954 main.go:141] libmachine: (addons-545460) DBG | I0920 17:48:14.790585   83977 retry.go:31] will retry after 1.219496298s: waiting for machine to come up
	I0920 17:48:16.012016   83954 main.go:141] libmachine: (addons-545460) DBG | domain addons-545460 has defined MAC address 52:54:00:9d:f7:f4 in network mk-addons-545460
	I0920 17:48:16.012416   83954 main.go:141] libmachine: (addons-545460) DBG | unable to find current IP address of domain addons-545460 in network mk-addons-545460
	I0920 17:48:16.012446   83954 main.go:141] libmachine: (addons-545460) DBG | I0920 17:48:16.012357   83977 retry.go:31] will retry after 2.212939419s: waiting for machine to come up
	I0920 17:48:18.226539   83954 main.go:141] libmachine: (addons-545460) DBG | domain addons-545460 has defined MAC address 52:54:00:9d:f7:f4 in network mk-addons-545460
	I0920 17:48:18.226915   83954 main.go:141] libmachine: (addons-545460) DBG | unable to find current IP address of domain addons-545460 in network mk-addons-545460
	I0920 17:48:18.226945   83954 main.go:141] libmachine: (addons-545460) DBG | I0920 17:48:18.226861   83977 retry.go:31] will retry after 2.872654266s: waiting for machine to come up
	I0920 17:48:21.102727   83954 main.go:141] libmachine: (addons-545460) DBG | domain addons-545460 has defined MAC address 52:54:00:9d:f7:f4 in network mk-addons-545460
	I0920 17:48:21.103123   83954 main.go:141] libmachine: (addons-545460) DBG | unable to find current IP address of domain addons-545460 in network mk-addons-545460
	I0920 17:48:21.103147   83954 main.go:141] libmachine: (addons-545460) DBG | I0920 17:48:21.103079   83977 retry.go:31] will retry after 2.841694616s: waiting for machine to come up
	I0920 17:48:23.946422   83954 main.go:141] libmachine: (addons-545460) DBG | domain addons-545460 has defined MAC address 52:54:00:9d:f7:f4 in network mk-addons-545460
	I0920 17:48:23.946714   83954 main.go:141] libmachine: (addons-545460) DBG | unable to find current IP address of domain addons-545460 in network mk-addons-545460
	I0920 17:48:23.946755   83954 main.go:141] libmachine: (addons-545460) DBG | I0920 17:48:23.946683   83977 retry.go:31] will retry after 3.69812036s: waiting for machine to come up
	I0920 17:48:27.649626   83954 main.go:141] libmachine: (addons-545460) DBG | domain addons-545460 has defined MAC address 52:54:00:9d:f7:f4 in network mk-addons-545460
	I0920 17:48:27.650125   83954 main.go:141] libmachine: (addons-545460) DBG | unable to find current IP address of domain addons-545460 in network mk-addons-545460
	I0920 17:48:27.650156   83954 main.go:141] libmachine: (addons-545460) DBG | I0920 17:48:27.650078   83977 retry.go:31] will retry after 5.045542451s: waiting for machine to come up
	I0920 17:48:32.701240   83954 main.go:141] libmachine: (addons-545460) DBG | domain addons-545460 has defined MAC address 52:54:00:9d:f7:f4 in network mk-addons-545460
	I0920 17:48:32.701728   83954 main.go:141] libmachine: (addons-545460) Found IP for machine: 192.168.39.174
	I0920 17:48:32.701757   83954 main.go:141] libmachine: (addons-545460) DBG | domain addons-545460 has current primary IP address 192.168.39.174 and MAC address 52:54:00:9d:f7:f4 in network mk-addons-545460
	I0920 17:48:32.701767   83954 main.go:141] libmachine: (addons-545460) Reserving static IP address...
	I0920 17:48:32.702113   83954 main.go:141] libmachine: (addons-545460) DBG | unable to find host DHCP lease matching {name: "addons-545460", mac: "52:54:00:9d:f7:f4", ip: "192.168.39.174"} in network mk-addons-545460
	I0920 17:48:32.774289   83954 main.go:141] libmachine: (addons-545460) DBG | Getting to WaitForSSH function...
	I0920 17:48:32.774318   83954 main.go:141] libmachine: (addons-545460) Reserved static IP address: 192.168.39.174
	I0920 17:48:32.774331   83954 main.go:141] libmachine: (addons-545460) Waiting for SSH to be available...
	I0920 17:48:32.777151   83954 main.go:141] libmachine: (addons-545460) DBG | domain addons-545460 has defined MAC address 52:54:00:9d:f7:f4 in network mk-addons-545460
	I0920 17:48:32.777435   83954 main.go:141] libmachine: (addons-545460) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:9d:f7:f4", ip: ""} in network mk-addons-545460
	I0920 17:48:32.777459   83954 main.go:141] libmachine: (addons-545460) DBG | unable to find defined IP address of network mk-addons-545460 interface with MAC address 52:54:00:9d:f7:f4
	I0920 17:48:32.777676   83954 main.go:141] libmachine: (addons-545460) DBG | Using SSH client type: external
	I0920 17:48:32.777692   83954 main.go:141] libmachine: (addons-545460) DBG | Using SSH private key: /home/jenkins/minikube-integration/19678-76160/.minikube/machines/addons-545460/id_rsa (-rw-------)
	I0920 17:48:32.777743   83954 main.go:141] libmachine: (addons-545460) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19678-76160/.minikube/machines/addons-545460/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0920 17:48:32.777763   83954 main.go:141] libmachine: (addons-545460) DBG | About to run SSH command:
	I0920 17:48:32.777776   83954 main.go:141] libmachine: (addons-545460) DBG | exit 0
	I0920 17:48:32.781474   83954 main.go:141] libmachine: (addons-545460) DBG | SSH cmd err, output: exit status 255: 
	I0920 17:48:32.781493   83954 main.go:141] libmachine: (addons-545460) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I0920 17:48:32.781499   83954 main.go:141] libmachine: (addons-545460) DBG | command : exit 0
	I0920 17:48:32.781504   83954 main.go:141] libmachine: (addons-545460) DBG | err     : exit status 255
	I0920 17:48:32.781511   83954 main.go:141] libmachine: (addons-545460) DBG | output  : 
	I0920 17:48:35.784380   83954 main.go:141] libmachine: (addons-545460) DBG | Getting to WaitForSSH function...
	I0920 17:48:35.786794   83954 main.go:141] libmachine: (addons-545460) DBG | domain addons-545460 has defined MAC address 52:54:00:9d:f7:f4 in network mk-addons-545460
	I0920 17:48:35.787262   83954 main.go:141] libmachine: (addons-545460) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:f7:f4", ip: ""} in network mk-addons-545460: {Iface:virbr1 ExpiryTime:2024-09-20 18:48:22 +0000 UTC Type:0 Mac:52:54:00:9d:f7:f4 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:addons-545460 Clientid:01:52:54:00:9d:f7:f4}
	I0920 17:48:35.787296   83954 main.go:141] libmachine: (addons-545460) DBG | domain addons-545460 has defined IP address 192.168.39.174 and MAC address 52:54:00:9d:f7:f4 in network mk-addons-545460
	I0920 17:48:35.787455   83954 main.go:141] libmachine: (addons-545460) DBG | Using SSH client type: external
	I0920 17:48:35.787481   83954 main.go:141] libmachine: (addons-545460) DBG | Using SSH private key: /home/jenkins/minikube-integration/19678-76160/.minikube/machines/addons-545460/id_rsa (-rw-------)
	I0920 17:48:35.787508   83954 main.go:141] libmachine: (addons-545460) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.174 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19678-76160/.minikube/machines/addons-545460/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0920 17:48:35.787526   83954 main.go:141] libmachine: (addons-545460) DBG | About to run SSH command:
	I0920 17:48:35.787542   83954 main.go:141] libmachine: (addons-545460) DBG | exit 0
	I0920 17:48:35.913498   83954 main.go:141] libmachine: (addons-545460) DBG | SSH cmd err, output: <nil>: 
	I0920 17:48:35.913798   83954 main.go:141] libmachine: (addons-545460) KVM machine creation complete!
	I0920 17:48:35.914106   83954 main.go:141] libmachine: (addons-545460) Calling .GetConfigRaw
	I0920 17:48:35.914660   83954 main.go:141] libmachine: (addons-545460) Calling .DriverName
	I0920 17:48:35.914868   83954 main.go:141] libmachine: (addons-545460) Calling .DriverName
	I0920 17:48:35.915054   83954 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0920 17:48:35.915069   83954 main.go:141] libmachine: (addons-545460) Calling .GetState
	I0920 17:48:35.916262   83954 main.go:141] libmachine: Detecting operating system of created instance...
	I0920 17:48:35.916278   83954 main.go:141] libmachine: Waiting for SSH to be available...
	I0920 17:48:35.916283   83954 main.go:141] libmachine: Getting to WaitForSSH function...
	I0920 17:48:35.916294   83954 main.go:141] libmachine: (addons-545460) Calling .GetSSHHostname
	I0920 17:48:35.918609   83954 main.go:141] libmachine: (addons-545460) DBG | domain addons-545460 has defined MAC address 52:54:00:9d:f7:f4 in network mk-addons-545460
	I0920 17:48:35.918967   83954 main.go:141] libmachine: (addons-545460) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:f7:f4", ip: ""} in network mk-addons-545460: {Iface:virbr1 ExpiryTime:2024-09-20 18:48:22 +0000 UTC Type:0 Mac:52:54:00:9d:f7:f4 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:addons-545460 Clientid:01:52:54:00:9d:f7:f4}
	I0920 17:48:35.918995   83954 main.go:141] libmachine: (addons-545460) DBG | domain addons-545460 has defined IP address 192.168.39.174 and MAC address 52:54:00:9d:f7:f4 in network mk-addons-545460
	I0920 17:48:35.919198   83954 main.go:141] libmachine: (addons-545460) Calling .GetSSHPort
	I0920 17:48:35.919400   83954 main.go:141] libmachine: (addons-545460) Calling .GetSSHKeyPath
	I0920 17:48:35.919551   83954 main.go:141] libmachine: (addons-545460) Calling .GetSSHKeyPath
	I0920 17:48:35.919687   83954 main.go:141] libmachine: (addons-545460) Calling .GetSSHUsername
	I0920 17:48:35.919826   83954 main.go:141] libmachine: Using SSH client type: native
	I0920 17:48:35.920026   83954 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.174 22 <nil> <nil>}
	I0920 17:48:35.920037   83954 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0920 17:48:36.024516   83954 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0920 17:48:36.024541   83954 main.go:141] libmachine: Detecting the provisioner...
	I0920 17:48:36.024549   83954 main.go:141] libmachine: (addons-545460) Calling .GetSSHHostname
	I0920 17:48:36.027441   83954 main.go:141] libmachine: (addons-545460) DBG | domain addons-545460 has defined MAC address 52:54:00:9d:f7:f4 in network mk-addons-545460
	I0920 17:48:36.027803   83954 main.go:141] libmachine: (addons-545460) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:f7:f4", ip: ""} in network mk-addons-545460: {Iface:virbr1 ExpiryTime:2024-09-20 18:48:22 +0000 UTC Type:0 Mac:52:54:00:9d:f7:f4 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:addons-545460 Clientid:01:52:54:00:9d:f7:f4}
	I0920 17:48:36.027824   83954 main.go:141] libmachine: (addons-545460) DBG | domain addons-545460 has defined IP address 192.168.39.174 and MAC address 52:54:00:9d:f7:f4 in network mk-addons-545460
	I0920 17:48:36.027934   83954 main.go:141] libmachine: (addons-545460) Calling .GetSSHPort
	I0920 17:48:36.028118   83954 main.go:141] libmachine: (addons-545460) Calling .GetSSHKeyPath
	I0920 17:48:36.028290   83954 main.go:141] libmachine: (addons-545460) Calling .GetSSHKeyPath
	I0920 17:48:36.028426   83954 main.go:141] libmachine: (addons-545460) Calling .GetSSHUsername
	I0920 17:48:36.028589   83954 main.go:141] libmachine: Using SSH client type: native
	I0920 17:48:36.028807   83954 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.174 22 <nil> <nil>}
	I0920 17:48:36.028822   83954 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0920 17:48:36.133827   83954 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0920 17:48:36.133902   83954 main.go:141] libmachine: found compatible host: buildroot
	I0920 17:48:36.133911   83954 main.go:141] libmachine: Provisioning with buildroot...
	I0920 17:48:36.133919   83954 main.go:141] libmachine: (addons-545460) Calling .GetMachineName
	I0920 17:48:36.134149   83954 buildroot.go:166] provisioning hostname "addons-545460"
	I0920 17:48:36.134174   83954 main.go:141] libmachine: (addons-545460) Calling .GetMachineName
	I0920 17:48:36.134361   83954 main.go:141] libmachine: (addons-545460) Calling .GetSSHHostname
	I0920 17:48:36.137038   83954 main.go:141] libmachine: (addons-545460) DBG | domain addons-545460 has defined MAC address 52:54:00:9d:f7:f4 in network mk-addons-545460
	I0920 17:48:36.137351   83954 main.go:141] libmachine: (addons-545460) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:f7:f4", ip: ""} in network mk-addons-545460: {Iface:virbr1 ExpiryTime:2024-09-20 18:48:22 +0000 UTC Type:0 Mac:52:54:00:9d:f7:f4 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:addons-545460 Clientid:01:52:54:00:9d:f7:f4}
	I0920 17:48:36.137376   83954 main.go:141] libmachine: (addons-545460) DBG | domain addons-545460 has defined IP address 192.168.39.174 and MAC address 52:54:00:9d:f7:f4 in network mk-addons-545460
	I0920 17:48:36.137530   83954 main.go:141] libmachine: (addons-545460) Calling .GetSSHPort
	I0920 17:48:36.137673   83954 main.go:141] libmachine: (addons-545460) Calling .GetSSHKeyPath
	I0920 17:48:36.137843   83954 main.go:141] libmachine: (addons-545460) Calling .GetSSHKeyPath
	I0920 17:48:36.137974   83954 main.go:141] libmachine: (addons-545460) Calling .GetSSHUsername
	I0920 17:48:36.138110   83954 main.go:141] libmachine: Using SSH client type: native
	I0920 17:48:36.138283   83954 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.174 22 <nil> <nil>}
	I0920 17:48:36.138296   83954 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-545460 && echo "addons-545460" | sudo tee /etc/hostname
	I0920 17:48:36.261679   83954 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-545460
	
	I0920 17:48:36.261703   83954 main.go:141] libmachine: (addons-545460) Calling .GetSSHHostname
	I0920 17:48:36.264731   83954 main.go:141] libmachine: (addons-545460) DBG | domain addons-545460 has defined MAC address 52:54:00:9d:f7:f4 in network mk-addons-545460
	I0920 17:48:36.265097   83954 main.go:141] libmachine: (addons-545460) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:f7:f4", ip: ""} in network mk-addons-545460: {Iface:virbr1 ExpiryTime:2024-09-20 18:48:22 +0000 UTC Type:0 Mac:52:54:00:9d:f7:f4 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:addons-545460 Clientid:01:52:54:00:9d:f7:f4}
	I0920 17:48:36.265128   83954 main.go:141] libmachine: (addons-545460) DBG | domain addons-545460 has defined IP address 192.168.39.174 and MAC address 52:54:00:9d:f7:f4 in network mk-addons-545460
	I0920 17:48:36.265262   83954 main.go:141] libmachine: (addons-545460) Calling .GetSSHPort
	I0920 17:48:36.265524   83954 main.go:141] libmachine: (addons-545460) Calling .GetSSHKeyPath
	I0920 17:48:36.265723   83954 main.go:141] libmachine: (addons-545460) Calling .GetSSHKeyPath
	I0920 17:48:36.265877   83954 main.go:141] libmachine: (addons-545460) Calling .GetSSHUsername
	I0920 17:48:36.266044   83954 main.go:141] libmachine: Using SSH client type: native
	I0920 17:48:36.266263   83954 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.174 22 <nil> <nil>}
	I0920 17:48:36.266282   83954 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-545460' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-545460/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-545460' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0920 17:48:36.382176   83954 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0920 17:48:36.382204   83954 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19678-76160/.minikube CaCertPath:/home/jenkins/minikube-integration/19678-76160/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19678-76160/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19678-76160/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19678-76160/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19678-76160/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19678-76160/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19678-76160/.minikube}
	I0920 17:48:36.382250   83954 buildroot.go:174] setting up certificates
	I0920 17:48:36.382267   83954 provision.go:84] configureAuth start
	I0920 17:48:36.382281   83954 main.go:141] libmachine: (addons-545460) Calling .GetMachineName
	I0920 17:48:36.382565   83954 main.go:141] libmachine: (addons-545460) Calling .GetIP
	I0920 17:48:36.385424   83954 main.go:141] libmachine: (addons-545460) DBG | domain addons-545460 has defined MAC address 52:54:00:9d:f7:f4 in network mk-addons-545460
	I0920 17:48:36.385873   83954 main.go:141] libmachine: (addons-545460) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:f7:f4", ip: ""} in network mk-addons-545460: {Iface:virbr1 ExpiryTime:2024-09-20 18:48:22 +0000 UTC Type:0 Mac:52:54:00:9d:f7:f4 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:addons-545460 Clientid:01:52:54:00:9d:f7:f4}
	I0920 17:48:36.385899   83954 main.go:141] libmachine: (addons-545460) DBG | domain addons-545460 has defined IP address 192.168.39.174 and MAC address 52:54:00:9d:f7:f4 in network mk-addons-545460
	I0920 17:48:36.386054   83954 main.go:141] libmachine: (addons-545460) Calling .GetSSHHostname
	I0920 17:48:36.388072   83954 main.go:141] libmachine: (addons-545460) DBG | domain addons-545460 has defined MAC address 52:54:00:9d:f7:f4 in network mk-addons-545460
	I0920 17:48:36.388392   83954 main.go:141] libmachine: (addons-545460) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:f7:f4", ip: ""} in network mk-addons-545460: {Iface:virbr1 ExpiryTime:2024-09-20 18:48:22 +0000 UTC Type:0 Mac:52:54:00:9d:f7:f4 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:addons-545460 Clientid:01:52:54:00:9d:f7:f4}
	I0920 17:48:36.388417   83954 main.go:141] libmachine: (addons-545460) DBG | domain addons-545460 has defined IP address 192.168.39.174 and MAC address 52:54:00:9d:f7:f4 in network mk-addons-545460
	I0920 17:48:36.388520   83954 provision.go:143] copyHostCerts
	I0920 17:48:36.388604   83954 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19678-76160/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19678-76160/.minikube/key.pem (1679 bytes)
	I0920 17:48:36.388726   83954 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19678-76160/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19678-76160/.minikube/ca.pem (1078 bytes)
	I0920 17:48:36.388818   83954 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19678-76160/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19678-76160/.minikube/cert.pem (1123 bytes)
	I0920 17:48:36.388877   83954 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19678-76160/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19678-76160/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19678-76160/.minikube/certs/ca-key.pem org=jenkins.addons-545460 san=[127.0.0.1 192.168.39.174 addons-545460 localhost minikube]
	I0920 17:48:36.535266   83954 provision.go:177] copyRemoteCerts
	I0920 17:48:36.535327   83954 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0920 17:48:36.535352   83954 main.go:141] libmachine: (addons-545460) Calling .GetSSHHostname
	I0920 17:48:36.537853   83954 main.go:141] libmachine: (addons-545460) DBG | domain addons-545460 has defined MAC address 52:54:00:9d:f7:f4 in network mk-addons-545460
	I0920 17:48:36.538130   83954 main.go:141] libmachine: (addons-545460) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:f7:f4", ip: ""} in network mk-addons-545460: {Iface:virbr1 ExpiryTime:2024-09-20 18:48:22 +0000 UTC Type:0 Mac:52:54:00:9d:f7:f4 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:addons-545460 Clientid:01:52:54:00:9d:f7:f4}
	I0920 17:48:36.538176   83954 main.go:141] libmachine: (addons-545460) DBG | domain addons-545460 has defined IP address 192.168.39.174 and MAC address 52:54:00:9d:f7:f4 in network mk-addons-545460
	I0920 17:48:36.538287   83954 main.go:141] libmachine: (addons-545460) Calling .GetSSHPort
	I0920 17:48:36.538548   83954 main.go:141] libmachine: (addons-545460) Calling .GetSSHKeyPath
	I0920 17:48:36.538754   83954 main.go:141] libmachine: (addons-545460) Calling .GetSSHUsername
	I0920 17:48:36.538914   83954 sshutil.go:53] new ssh client: &{IP:192.168.39.174 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19678-76160/.minikube/machines/addons-545460/id_rsa Username:docker}
	I0920 17:48:36.625107   83954 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-76160/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0920 17:48:36.648199   83954 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-76160/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0920 17:48:36.670323   83954 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-76160/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0920 17:48:36.692374   83954 provision.go:87] duration metric: took 310.091275ms to configureAuth
	I0920 17:48:36.692398   83954 buildroot.go:189] setting minikube options for container-runtime
	I0920 17:48:36.692572   83954 config.go:182] Loaded profile config "addons-545460": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0920 17:48:36.692604   83954 main.go:141] libmachine: (addons-545460) Calling .DriverName
	I0920 17:48:36.692919   83954 main.go:141] libmachine: (addons-545460) Calling .GetSSHHostname
	I0920 17:48:36.695463   83954 main.go:141] libmachine: (addons-545460) DBG | domain addons-545460 has defined MAC address 52:54:00:9d:f7:f4 in network mk-addons-545460
	I0920 17:48:36.695796   83954 main.go:141] libmachine: (addons-545460) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:f7:f4", ip: ""} in network mk-addons-545460: {Iface:virbr1 ExpiryTime:2024-09-20 18:48:22 +0000 UTC Type:0 Mac:52:54:00:9d:f7:f4 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:addons-545460 Clientid:01:52:54:00:9d:f7:f4}
	I0920 17:48:36.695821   83954 main.go:141] libmachine: (addons-545460) DBG | domain addons-545460 has defined IP address 192.168.39.174 and MAC address 52:54:00:9d:f7:f4 in network mk-addons-545460
	I0920 17:48:36.695967   83954 main.go:141] libmachine: (addons-545460) Calling .GetSSHPort
	I0920 17:48:36.696164   83954 main.go:141] libmachine: (addons-545460) Calling .GetSSHKeyPath
	I0920 17:48:36.696325   83954 main.go:141] libmachine: (addons-545460) Calling .GetSSHKeyPath
	I0920 17:48:36.696655   83954 main.go:141] libmachine: (addons-545460) Calling .GetSSHUsername
	I0920 17:48:36.696916   83954 main.go:141] libmachine: Using SSH client type: native
	I0920 17:48:36.697081   83954 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.174 22 <nil> <nil>}
	I0920 17:48:36.697092   83954 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0920 17:48:36.802672   83954 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0920 17:48:36.802699   83954 buildroot.go:70] root file system type: tmpfs
	I0920 17:48:36.802814   83954 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0920 17:48:36.802835   83954 main.go:141] libmachine: (addons-545460) Calling .GetSSHHostname
	I0920 17:48:36.805671   83954 main.go:141] libmachine: (addons-545460) DBG | domain addons-545460 has defined MAC address 52:54:00:9d:f7:f4 in network mk-addons-545460
	I0920 17:48:36.806010   83954 main.go:141] libmachine: (addons-545460) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:f7:f4", ip: ""} in network mk-addons-545460: {Iface:virbr1 ExpiryTime:2024-09-20 18:48:22 +0000 UTC Type:0 Mac:52:54:00:9d:f7:f4 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:addons-545460 Clientid:01:52:54:00:9d:f7:f4}
	I0920 17:48:36.806034   83954 main.go:141] libmachine: (addons-545460) DBG | domain addons-545460 has defined IP address 192.168.39.174 and MAC address 52:54:00:9d:f7:f4 in network mk-addons-545460
	I0920 17:48:36.806212   83954 main.go:141] libmachine: (addons-545460) Calling .GetSSHPort
	I0920 17:48:36.806393   83954 main.go:141] libmachine: (addons-545460) Calling .GetSSHKeyPath
	I0920 17:48:36.806549   83954 main.go:141] libmachine: (addons-545460) Calling .GetSSHKeyPath
	I0920 17:48:36.806671   83954 main.go:141] libmachine: (addons-545460) Calling .GetSSHUsername
	I0920 17:48:36.806826   83954 main.go:141] libmachine: Using SSH client type: native
	I0920 17:48:36.807002   83954 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.174 22 <nil> <nil>}
	I0920 17:48:36.807060   83954 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0920 17:48:36.929206   83954 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0920 17:48:36.929240   83954 main.go:141] libmachine: (addons-545460) Calling .GetSSHHostname
	I0920 17:48:36.931752   83954 main.go:141] libmachine: (addons-545460) DBG | domain addons-545460 has defined MAC address 52:54:00:9d:f7:f4 in network mk-addons-545460
	I0920 17:48:36.932107   83954 main.go:141] libmachine: (addons-545460) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:f7:f4", ip: ""} in network mk-addons-545460: {Iface:virbr1 ExpiryTime:2024-09-20 18:48:22 +0000 UTC Type:0 Mac:52:54:00:9d:f7:f4 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:addons-545460 Clientid:01:52:54:00:9d:f7:f4}
	I0920 17:48:36.932138   83954 main.go:141] libmachine: (addons-545460) DBG | domain addons-545460 has defined IP address 192.168.39.174 and MAC address 52:54:00:9d:f7:f4 in network mk-addons-545460
	I0920 17:48:36.932269   83954 main.go:141] libmachine: (addons-545460) Calling .GetSSHPort
	I0920 17:48:36.932458   83954 main.go:141] libmachine: (addons-545460) Calling .GetSSHKeyPath
	I0920 17:48:36.932611   83954 main.go:141] libmachine: (addons-545460) Calling .GetSSHKeyPath
	I0920 17:48:36.932752   83954 main.go:141] libmachine: (addons-545460) Calling .GetSSHUsername
	I0920 17:48:36.932899   83954 main.go:141] libmachine: Using SSH client type: native
	I0920 17:48:36.933063   83954 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.174 22 <nil> <nil>}
	I0920 17:48:36.933079   83954 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0920 17:48:39.611846   83954 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0920 17:48:39.611881   83954 main.go:141] libmachine: Checking connection to Docker...
	I0920 17:48:39.611894   83954 main.go:141] libmachine: (addons-545460) Calling .GetURL
	I0920 17:48:39.613232   83954 main.go:141] libmachine: (addons-545460) DBG | Using libvirt version 6000000
	I0920 17:48:39.615480   83954 main.go:141] libmachine: (addons-545460) DBG | domain addons-545460 has defined MAC address 52:54:00:9d:f7:f4 in network mk-addons-545460
	I0920 17:48:39.615815   83954 main.go:141] libmachine: (addons-545460) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:f7:f4", ip: ""} in network mk-addons-545460: {Iface:virbr1 ExpiryTime:2024-09-20 18:48:22 +0000 UTC Type:0 Mac:52:54:00:9d:f7:f4 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:addons-545460 Clientid:01:52:54:00:9d:f7:f4}
	I0920 17:48:39.615845   83954 main.go:141] libmachine: (addons-545460) DBG | domain addons-545460 has defined IP address 192.168.39.174 and MAC address 52:54:00:9d:f7:f4 in network mk-addons-545460
	I0920 17:48:39.615993   83954 main.go:141] libmachine: Docker is up and running!
	I0920 17:48:39.616008   83954 main.go:141] libmachine: Reticulating splines...
	I0920 17:48:39.616017   83954 client.go:171] duration metric: took 31.706141789s to LocalClient.Create
	I0920 17:48:39.616047   83954 start.go:167] duration metric: took 31.706210913s to libmachine.API.Create "addons-545460"
	I0920 17:48:39.616060   83954 start.go:293] postStartSetup for "addons-545460" (driver="kvm2")
	I0920 17:48:39.616075   83954 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0920 17:48:39.616101   83954 main.go:141] libmachine: (addons-545460) Calling .DriverName
	I0920 17:48:39.616337   83954 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0920 17:48:39.616358   83954 main.go:141] libmachine: (addons-545460) Calling .GetSSHHostname
	I0920 17:48:39.618439   83954 main.go:141] libmachine: (addons-545460) DBG | domain addons-545460 has defined MAC address 52:54:00:9d:f7:f4 in network mk-addons-545460
	I0920 17:48:39.618704   83954 main.go:141] libmachine: (addons-545460) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:f7:f4", ip: ""} in network mk-addons-545460: {Iface:virbr1 ExpiryTime:2024-09-20 18:48:22 +0000 UTC Type:0 Mac:52:54:00:9d:f7:f4 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:addons-545460 Clientid:01:52:54:00:9d:f7:f4}
	I0920 17:48:39.618732   83954 main.go:141] libmachine: (addons-545460) DBG | domain addons-545460 has defined IP address 192.168.39.174 and MAC address 52:54:00:9d:f7:f4 in network mk-addons-545460
	I0920 17:48:39.618876   83954 main.go:141] libmachine: (addons-545460) Calling .GetSSHPort
	I0920 17:48:39.619051   83954 main.go:141] libmachine: (addons-545460) Calling .GetSSHKeyPath
	I0920 17:48:39.619217   83954 main.go:141] libmachine: (addons-545460) Calling .GetSSHUsername
	I0920 17:48:39.619391   83954 sshutil.go:53] new ssh client: &{IP:192.168.39.174 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19678-76160/.minikube/machines/addons-545460/id_rsa Username:docker}
	I0920 17:48:39.704003   83954 ssh_runner.go:195] Run: cat /etc/os-release
	I0920 17:48:39.708275   83954 info.go:137] Remote host: Buildroot 2023.02.9
	I0920 17:48:39.708297   83954 filesync.go:126] Scanning /home/jenkins/minikube-integration/19678-76160/.minikube/addons for local assets ...
	I0920 17:48:39.708368   83954 filesync.go:126] Scanning /home/jenkins/minikube-integration/19678-76160/.minikube/files for local assets ...
	I0920 17:48:39.708393   83954 start.go:296] duration metric: took 92.325858ms for postStartSetup
	I0920 17:48:39.708430   83954 main.go:141] libmachine: (addons-545460) Calling .GetConfigRaw
	I0920 17:48:39.709006   83954 main.go:141] libmachine: (addons-545460) Calling .GetIP
	I0920 17:48:39.711368   83954 main.go:141] libmachine: (addons-545460) DBG | domain addons-545460 has defined MAC address 52:54:00:9d:f7:f4 in network mk-addons-545460
	I0920 17:48:39.711711   83954 main.go:141] libmachine: (addons-545460) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:f7:f4", ip: ""} in network mk-addons-545460: {Iface:virbr1 ExpiryTime:2024-09-20 18:48:22 +0000 UTC Type:0 Mac:52:54:00:9d:f7:f4 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:addons-545460 Clientid:01:52:54:00:9d:f7:f4}
	I0920 17:48:39.711737   83954 main.go:141] libmachine: (addons-545460) DBG | domain addons-545460 has defined IP address 192.168.39.174 and MAC address 52:54:00:9d:f7:f4 in network mk-addons-545460
	I0920 17:48:39.711996   83954 profile.go:143] Saving config to /home/jenkins/minikube-integration/19678-76160/.minikube/profiles/addons-545460/config.json ...
	I0920 17:48:39.712170   83954 start.go:128] duration metric: took 31.81972599s to createHost
	I0920 17:48:39.712192   83954 main.go:141] libmachine: (addons-545460) Calling .GetSSHHostname
	I0920 17:48:39.714237   83954 main.go:141] libmachine: (addons-545460) DBG | domain addons-545460 has defined MAC address 52:54:00:9d:f7:f4 in network mk-addons-545460
	I0920 17:48:39.714592   83954 main.go:141] libmachine: (addons-545460) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:f7:f4", ip: ""} in network mk-addons-545460: {Iface:virbr1 ExpiryTime:2024-09-20 18:48:22 +0000 UTC Type:0 Mac:52:54:00:9d:f7:f4 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:addons-545460 Clientid:01:52:54:00:9d:f7:f4}
	I0920 17:48:39.714618   83954 main.go:141] libmachine: (addons-545460) DBG | domain addons-545460 has defined IP address 192.168.39.174 and MAC address 52:54:00:9d:f7:f4 in network mk-addons-545460
	I0920 17:48:39.714790   83954 main.go:141] libmachine: (addons-545460) Calling .GetSSHPort
	I0920 17:48:39.714965   83954 main.go:141] libmachine: (addons-545460) Calling .GetSSHKeyPath
	I0920 17:48:39.715120   83954 main.go:141] libmachine: (addons-545460) Calling .GetSSHKeyPath
	I0920 17:48:39.715241   83954 main.go:141] libmachine: (addons-545460) Calling .GetSSHUsername
	I0920 17:48:39.715396   83954 main.go:141] libmachine: Using SSH client type: native
	I0920 17:48:39.715602   83954 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.174 22 <nil> <nil>}
	I0920 17:48:39.715613   83954 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0920 17:48:39.821862   83954 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726854519.805701350
	
	I0920 17:48:39.821888   83954 fix.go:216] guest clock: 1726854519.805701350
	I0920 17:48:39.821898   83954 fix.go:229] Guest: 2024-09-20 17:48:39.80570135 +0000 UTC Remote: 2024-09-20 17:48:39.712181233 +0000 UTC m=+31.920314753 (delta=93.520117ms)
	I0920 17:48:39.821960   83954 fix.go:200] guest clock delta is within tolerance: 93.520117ms
	I0920 17:48:39.821966   83954 start.go:83] releasing machines lock for "addons-545460", held for 31.929615729s
	I0920 17:48:39.821989   83954 main.go:141] libmachine: (addons-545460) Calling .DriverName
	I0920 17:48:39.822237   83954 main.go:141] libmachine: (addons-545460) Calling .GetIP
	I0920 17:48:39.824799   83954 main.go:141] libmachine: (addons-545460) DBG | domain addons-545460 has defined MAC address 52:54:00:9d:f7:f4 in network mk-addons-545460
	I0920 17:48:39.825098   83954 main.go:141] libmachine: (addons-545460) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:f7:f4", ip: ""} in network mk-addons-545460: {Iface:virbr1 ExpiryTime:2024-09-20 18:48:22 +0000 UTC Type:0 Mac:52:54:00:9d:f7:f4 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:addons-545460 Clientid:01:52:54:00:9d:f7:f4}
	I0920 17:48:39.825117   83954 main.go:141] libmachine: (addons-545460) DBG | domain addons-545460 has defined IP address 192.168.39.174 and MAC address 52:54:00:9d:f7:f4 in network mk-addons-545460
	I0920 17:48:39.825288   83954 main.go:141] libmachine: (addons-545460) Calling .DriverName
	I0920 17:48:39.825741   83954 main.go:141] libmachine: (addons-545460) Calling .DriverName
	I0920 17:48:39.825920   83954 main.go:141] libmachine: (addons-545460) Calling .DriverName
	I0920 17:48:39.826032   83954 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0920 17:48:39.826078   83954 main.go:141] libmachine: (addons-545460) Calling .GetSSHHostname
	I0920 17:48:39.826134   83954 ssh_runner.go:195] Run: cat /version.json
	I0920 17:48:39.826161   83954 main.go:141] libmachine: (addons-545460) Calling .GetSSHHostname
	I0920 17:48:39.828464   83954 main.go:141] libmachine: (addons-545460) DBG | domain addons-545460 has defined MAC address 52:54:00:9d:f7:f4 in network mk-addons-545460
	I0920 17:48:39.828790   83954 main.go:141] libmachine: (addons-545460) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:f7:f4", ip: ""} in network mk-addons-545460: {Iface:virbr1 ExpiryTime:2024-09-20 18:48:22 +0000 UTC Type:0 Mac:52:54:00:9d:f7:f4 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:addons-545460 Clientid:01:52:54:00:9d:f7:f4}
	I0920 17:48:39.828816   83954 main.go:141] libmachine: (addons-545460) DBG | domain addons-545460 has defined IP address 192.168.39.174 and MAC address 52:54:00:9d:f7:f4 in network mk-addons-545460
	I0920 17:48:39.828835   83954 main.go:141] libmachine: (addons-545460) DBG | domain addons-545460 has defined MAC address 52:54:00:9d:f7:f4 in network mk-addons-545460
	I0920 17:48:39.828920   83954 main.go:141] libmachine: (addons-545460) Calling .GetSSHPort
	I0920 17:48:39.829095   83954 main.go:141] libmachine: (addons-545460) Calling .GetSSHKeyPath
	I0920 17:48:39.829220   83954 main.go:141] libmachine: (addons-545460) Calling .GetSSHUsername
	I0920 17:48:39.829293   83954 main.go:141] libmachine: (addons-545460) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:f7:f4", ip: ""} in network mk-addons-545460: {Iface:virbr1 ExpiryTime:2024-09-20 18:48:22 +0000 UTC Type:0 Mac:52:54:00:9d:f7:f4 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:addons-545460 Clientid:01:52:54:00:9d:f7:f4}
	I0920 17:48:39.829312   83954 main.go:141] libmachine: (addons-545460) DBG | domain addons-545460 has defined IP address 192.168.39.174 and MAC address 52:54:00:9d:f7:f4 in network mk-addons-545460
	I0920 17:48:39.829322   83954 sshutil.go:53] new ssh client: &{IP:192.168.39.174 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19678-76160/.minikube/machines/addons-545460/id_rsa Username:docker}
	I0920 17:48:39.829490   83954 main.go:141] libmachine: (addons-545460) Calling .GetSSHPort
	I0920 17:48:39.829629   83954 main.go:141] libmachine: (addons-545460) Calling .GetSSHKeyPath
	I0920 17:48:39.829744   83954 main.go:141] libmachine: (addons-545460) Calling .GetSSHUsername
	I0920 17:48:39.829863   83954 sshutil.go:53] new ssh client: &{IP:192.168.39.174 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19678-76160/.minikube/machines/addons-545460/id_rsa Username:docker}
	I0920 17:48:39.931245   83954 ssh_runner.go:195] Run: systemctl --version
	I0920 17:48:39.937271   83954 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0920 17:48:39.942749   83954 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0920 17:48:39.942822   83954 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0920 17:48:39.959439   83954 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0920 17:48:39.959472   83954 start.go:495] detecting cgroup driver to use...
	I0920 17:48:39.959613   83954 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0920 17:48:39.977425   83954 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0920 17:48:39.988039   83954 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0920 17:48:39.998567   83954 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0920 17:48:39.998634   83954 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0920 17:48:40.009211   83954 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0920 17:48:40.019982   83954 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0920 17:48:40.030241   83954 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0920 17:48:40.040906   83954 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0920 17:48:40.051426   83954 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0920 17:48:40.061867   83954 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0920 17:48:40.072436   83954 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0920 17:48:40.083091   83954 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0920 17:48:40.092390   83954 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0920 17:48:40.101955   83954 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 17:48:40.209309   83954 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0920 17:48:40.233545   83954 start.go:495] detecting cgroup driver to use...
	I0920 17:48:40.233654   83954 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0920 17:48:40.249607   83954 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0920 17:48:40.268479   83954 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0920 17:48:40.287083   83954 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0920 17:48:40.300337   83954 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0920 17:48:40.313713   83954 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0920 17:48:40.344711   83954 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0920 17:48:40.358613   83954 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0920 17:48:40.376551   83954 ssh_runner.go:195] Run: which cri-dockerd
	I0920 17:48:40.380470   83954 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0920 17:48:40.390092   83954 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0920 17:48:40.406053   83954 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0920 17:48:40.518019   83954 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0920 17:48:40.635232   83954 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0920 17:48:40.635384   83954 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0920 17:48:40.651692   83954 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 17:48:40.760140   83954 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0920 17:48:43.116038   83954 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.355854096s)
	I0920 17:48:43.116109   83954 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0920 17:48:43.129557   83954 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0920 17:48:43.142600   83954 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0920 17:48:43.267599   83954 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0920 17:48:43.392973   83954 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 17:48:43.513989   83954 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0920 17:48:43.530415   83954 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0920 17:48:43.543587   83954 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 17:48:43.663689   83954 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0920 17:48:43.739834   83954 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0920 17:48:43.739910   83954 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0920 17:48:43.745697   83954 start.go:563] Will wait 60s for crictl version
	I0920 17:48:43.745760   83954 ssh_runner.go:195] Run: which crictl
	I0920 17:48:43.750550   83954 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0920 17:48:43.794218   83954 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.2.1
	RuntimeApiVersion:  v1
	I0920 17:48:43.794297   83954 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0920 17:48:43.820725   83954 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0920 17:48:43.846186   83954 out.go:235] * Preparing Kubernetes v1.31.1 on Docker 27.2.1 ...
	I0920 17:48:43.846238   83954 main.go:141] libmachine: (addons-545460) Calling .GetIP
	I0920 17:48:43.848537   83954 main.go:141] libmachine: (addons-545460) DBG | domain addons-545460 has defined MAC address 52:54:00:9d:f7:f4 in network mk-addons-545460
	I0920 17:48:43.848889   83954 main.go:141] libmachine: (addons-545460) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:f7:f4", ip: ""} in network mk-addons-545460: {Iface:virbr1 ExpiryTime:2024-09-20 18:48:22 +0000 UTC Type:0 Mac:52:54:00:9d:f7:f4 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:addons-545460 Clientid:01:52:54:00:9d:f7:f4}
	I0920 17:48:43.848922   83954 main.go:141] libmachine: (addons-545460) DBG | domain addons-545460 has defined IP address 192.168.39.174 and MAC address 52:54:00:9d:f7:f4 in network mk-addons-545460
	I0920 17:48:43.849173   83954 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0920 17:48:43.853229   83954 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0920 17:48:43.866080   83954 kubeadm.go:883] updating cluster {Name:addons-545460 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.
1 ClusterName:addons-545460 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.174 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Mo
untType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0920 17:48:43.866210   83954 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0920 17:48:43.866328   83954 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0920 17:48:43.882902   83954 docker.go:685] Got preloaded images: 
	I0920 17:48:43.882927   83954 docker.go:691] registry.k8s.io/kube-apiserver:v1.31.1 wasn't preloaded
	I0920 17:48:43.882977   83954 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0920 17:48:43.892767   83954 ssh_runner.go:195] Run: which lz4
	I0920 17:48:43.896682   83954 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0920 17:48:43.900591   83954 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0920 17:48:43.900619   83954 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-76160/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (342028912 bytes)
	I0920 17:48:45.005033   83954 docker.go:649] duration metric: took 1.108401728s to copy over tarball
	I0920 17:48:45.005103   83954 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0920 17:48:46.816644   83954 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.811501258s)
	I0920 17:48:46.816690   83954 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0920 17:48:46.850018   83954 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0920 17:48:46.863732   83954 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2631 bytes)
	I0920 17:48:46.881105   83954 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 17:48:46.990991   83954 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0920 17:48:50.273707   83954 ssh_runner.go:235] Completed: sudo systemctl restart docker: (3.282664353s)
	I0920 17:48:50.273833   83954 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0920 17:48:50.293406   83954 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.31.1
	registry.k8s.io/kube-scheduler:v1.31.1
	registry.k8s.io/kube-controller-manager:v1.31.1
	registry.k8s.io/kube-proxy:v1.31.1
	registry.k8s.io/coredns/coredns:v1.11.3
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0920 17:48:50.293436   83954 cache_images.go:84] Images are preloaded, skipping loading
	I0920 17:48:50.293450   83954 kubeadm.go:934] updating node { 192.168.39.174 8443 v1.31.1 docker true true} ...
	I0920 17:48:50.293799   83954 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-545460 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.174
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:addons-545460 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0920 17:48:50.293886   83954 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0920 17:48:50.346476   83954 cni.go:84] Creating CNI manager for ""
	I0920 17:48:50.346513   83954 cni.go:158] "kvm2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0920 17:48:50.346526   83954 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0920 17:48:50.346550   83954 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.174 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-545460 NodeName:addons-545460 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.174"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.174 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc
/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0920 17:48:50.346736   83954 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.174
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "addons-545460"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.174
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.174"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0920 17:48:50.346813   83954 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0920 17:48:50.357186   83954 binaries.go:44] Found k8s binaries, skipping transfer
	I0920 17:48:50.357273   83954 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0920 17:48:50.367082   83954 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (314 bytes)
	I0920 17:48:50.383508   83954 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0920 17:48:50.399868   83954 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2161 bytes)
	I0920 17:48:50.416008   83954 ssh_runner.go:195] Run: grep 192.168.39.174	control-plane.minikube.internal$ /etc/hosts
	I0920 17:48:50.419751   83954 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.174	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0920 17:48:50.431840   83954 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 17:48:50.540588   83954 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0920 17:48:50.560157   83954 certs.go:68] Setting up /home/jenkins/minikube-integration/19678-76160/.minikube/profiles/addons-545460 for IP: 192.168.39.174
	I0920 17:48:50.560178   83954 certs.go:194] generating shared ca certs ...
	I0920 17:48:50.560198   83954 certs.go:226] acquiring lock for ca certs: {Name:mk3fd1d97ba01cef92a791ef76bcf5834811fbc4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 17:48:50.560372   83954 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19678-76160/.minikube/ca.key
	I0920 17:48:50.718881   83954 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19678-76160/.minikube/ca.crt ...
	I0920 17:48:50.718908   83954 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19678-76160/.minikube/ca.crt: {Name:mkfed53e719c3d70782ae5039f08502b88dfce17 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 17:48:50.719100   83954 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19678-76160/.minikube/ca.key ...
	I0920 17:48:50.719117   83954 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19678-76160/.minikube/ca.key: {Name:mkb24ef326839e1c6b1e9153f5046c4d8b0e2275 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 17:48:50.719218   83954 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19678-76160/.minikube/proxy-client-ca.key
	I0920 17:48:50.893831   83954 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19678-76160/.minikube/proxy-client-ca.crt ...
	I0920 17:48:50.893870   83954 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19678-76160/.minikube/proxy-client-ca.crt: {Name:mk14764d5695bef95a1883a2f7c8fba10066e5df Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 17:48:50.894061   83954 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19678-76160/.minikube/proxy-client-ca.key ...
	I0920 17:48:50.894076   83954 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19678-76160/.minikube/proxy-client-ca.key: {Name:mk2ee05d7f49aa14d630956db567fcf5994c9022 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 17:48:50.894170   83954 certs.go:256] generating profile certs ...
	I0920 17:48:50.894230   83954 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19678-76160/.minikube/profiles/addons-545460/client.key
	I0920 17:48:50.894244   83954 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19678-76160/.minikube/profiles/addons-545460/client.crt with IP's: []
	I0920 17:48:51.285546   83954 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19678-76160/.minikube/profiles/addons-545460/client.crt ...
	I0920 17:48:51.285577   83954 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19678-76160/.minikube/profiles/addons-545460/client.crt: {Name:mk1b046233fcf8ad276463bcec2a0239090c4d8d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 17:48:51.285780   83954 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19678-76160/.minikube/profiles/addons-545460/client.key ...
	I0920 17:48:51.285798   83954 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19678-76160/.minikube/profiles/addons-545460/client.key: {Name:mk25ad4bfc169bb1f6d289440cbe76da951e3fb7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 17:48:51.285901   83954 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19678-76160/.minikube/profiles/addons-545460/apiserver.key.03e51470
	I0920 17:48:51.285923   83954 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19678-76160/.minikube/profiles/addons-545460/apiserver.crt.03e51470 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.174]
	I0920 17:48:51.496971   83954 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19678-76160/.minikube/profiles/addons-545460/apiserver.crt.03e51470 ...
	I0920 17:48:51.497005   83954 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19678-76160/.minikube/profiles/addons-545460/apiserver.crt.03e51470: {Name:mk21e1b6843288edb2504715b36d613399dfa95b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 17:48:51.497199   83954 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19678-76160/.minikube/profiles/addons-545460/apiserver.key.03e51470 ...
	I0920 17:48:51.497218   83954 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19678-76160/.minikube/profiles/addons-545460/apiserver.key.03e51470: {Name:mk0994652d3aab5424c6c98dd2d70ee4086d9c3d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 17:48:51.497324   83954 certs.go:381] copying /home/jenkins/minikube-integration/19678-76160/.minikube/profiles/addons-545460/apiserver.crt.03e51470 -> /home/jenkins/minikube-integration/19678-76160/.minikube/profiles/addons-545460/apiserver.crt
	I0920 17:48:51.497438   83954 certs.go:385] copying /home/jenkins/minikube-integration/19678-76160/.minikube/profiles/addons-545460/apiserver.key.03e51470 -> /home/jenkins/minikube-integration/19678-76160/.minikube/profiles/addons-545460/apiserver.key
	I0920 17:48:51.497504   83954 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19678-76160/.minikube/profiles/addons-545460/proxy-client.key
	I0920 17:48:51.497524   83954 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19678-76160/.minikube/profiles/addons-545460/proxy-client.crt with IP's: []
	I0920 17:48:51.587215   83954 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19678-76160/.minikube/profiles/addons-545460/proxy-client.crt ...
	I0920 17:48:51.587244   83954 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19678-76160/.minikube/profiles/addons-545460/proxy-client.crt: {Name:mk4ca7eeadee4681edc6f4d55ad056102d3fcc05 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 17:48:51.587416   83954 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19678-76160/.minikube/profiles/addons-545460/proxy-client.key ...
	I0920 17:48:51.587430   83954 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19678-76160/.minikube/profiles/addons-545460/proxy-client.key: {Name:mkb601a7fb7981335da2cb38a9c13132b9a75f07 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 17:48:51.587625   83954 certs.go:484] found cert: /home/jenkins/minikube-integration/19678-76160/.minikube/certs/ca-key.pem (1675 bytes)
	I0920 17:48:51.587661   83954 certs.go:484] found cert: /home/jenkins/minikube-integration/19678-76160/.minikube/certs/ca.pem (1078 bytes)
	I0920 17:48:51.587686   83954 certs.go:484] found cert: /home/jenkins/minikube-integration/19678-76160/.minikube/certs/cert.pem (1123 bytes)
	I0920 17:48:51.587711   83954 certs.go:484] found cert: /home/jenkins/minikube-integration/19678-76160/.minikube/certs/key.pem (1679 bytes)
	I0920 17:48:51.588377   83954 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-76160/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0920 17:48:51.613193   83954 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-76160/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0920 17:48:51.636272   83954 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-76160/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0920 17:48:51.659131   83954 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-76160/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0920 17:48:51.682138   83954 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-76160/.minikube/profiles/addons-545460/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0920 17:48:51.704937   83954 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-76160/.minikube/profiles/addons-545460/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0920 17:48:51.727730   83954 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-76160/.minikube/profiles/addons-545460/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0920 17:48:51.750268   83954 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-76160/.minikube/profiles/addons-545460/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0920 17:48:51.772918   83954 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-76160/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0920 17:48:51.795733   83954 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0920 17:48:51.811505   83954 ssh_runner.go:195] Run: openssl version
	I0920 17:48:51.817255   83954 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0920 17:48:51.827544   83954 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0920 17:48:51.831892   83954 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 20 17:48 /usr/share/ca-certificates/minikubeCA.pem
	I0920 17:48:51.831952   83954 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0920 17:48:51.837671   83954 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0920 17:48:51.847949   83954 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0920 17:48:51.851879   83954 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0920 17:48:51.851924   83954 kubeadm.go:392] StartCluster: {Name:addons-545460 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 C
lusterName:addons-545460 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.174 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Mount
Type:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 17:48:51.852025   83954 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0920 17:48:51.867700   83954 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0920 17:48:51.877139   83954 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0920 17:48:51.886405   83954 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0920 17:48:51.895585   83954 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0920 17:48:51.895606   83954 kubeadm.go:157] found existing configuration files:
	
	I0920 17:48:51.895649   83954 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0920 17:48:51.904276   83954 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0920 17:48:51.904323   83954 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0920 17:48:51.913247   83954 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0920 17:48:51.921912   83954 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0920 17:48:51.921985   83954 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0920 17:48:51.930693   83954 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0920 17:48:51.940363   83954 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0920 17:48:51.940394   83954 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0920 17:48:51.949182   83954 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0920 17:48:51.957587   83954 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0920 17:48:51.957634   83954 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0920 17:48:51.966715   83954 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0920 17:48:52.014432   83954 kubeadm.go:310] W0920 17:48:51.998704    1519 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0920 17:48:52.015162   83954 kubeadm.go:310] W0920 17:48:51.999714    1519 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0920 17:48:52.123487   83954 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0920 17:49:02.409134   83954 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0920 17:49:02.409234   83954 kubeadm.go:310] [preflight] Running pre-flight checks
	I0920 17:49:02.409348   83954 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0920 17:49:02.409464   83954 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0920 17:49:02.409606   83954 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0920 17:49:02.409702   83954 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0920 17:49:02.411402   83954 out.go:235]   - Generating certificates and keys ...
	I0920 17:49:02.411502   83954 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0920 17:49:02.411592   83954 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0920 17:49:02.411685   83954 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0920 17:49:02.411767   83954 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0920 17:49:02.411849   83954 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0920 17:49:02.411929   83954 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0920 17:49:02.411997   83954 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0920 17:49:02.412176   83954 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-545460 localhost] and IPs [192.168.39.174 127.0.0.1 ::1]
	I0920 17:49:02.412254   83954 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0920 17:49:02.412352   83954 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-545460 localhost] and IPs [192.168.39.174 127.0.0.1 ::1]
	I0920 17:49:02.412418   83954 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0920 17:49:02.412505   83954 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0920 17:49:02.412565   83954 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0920 17:49:02.412637   83954 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0920 17:49:02.412714   83954 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0920 17:49:02.412764   83954 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0920 17:49:02.412809   83954 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0920 17:49:02.412867   83954 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0920 17:49:02.412922   83954 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0920 17:49:02.412993   83954 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0920 17:49:02.413066   83954 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0920 17:49:02.414713   83954 out.go:235]   - Booting up control plane ...
	I0920 17:49:02.414797   83954 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0920 17:49:02.414892   83954 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0920 17:49:02.414960   83954 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0920 17:49:02.415069   83954 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0920 17:49:02.415170   83954 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0920 17:49:02.415213   83954 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0920 17:49:02.415365   83954 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0920 17:49:02.415485   83954 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0920 17:49:02.415563   83954 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001256567s
	I0920 17:49:02.415659   83954 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0920 17:49:02.415725   83954 kubeadm.go:310] [api-check] The API server is healthy after 5.001342299s
	I0920 17:49:02.415825   83954 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0920 17:49:02.415984   83954 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0920 17:49:02.416078   83954 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0920 17:49:02.416243   83954 kubeadm.go:310] [mark-control-plane] Marking the node addons-545460 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0920 17:49:02.416325   83954 kubeadm.go:310] [bootstrap-token] Using token: b5qlyc.i6pxpn8g9b1mexd1
	I0920 17:49:02.417567   83954 out.go:235]   - Configuring RBAC rules ...
	I0920 17:49:02.417673   83954 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0920 17:49:02.417759   83954 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0920 17:49:02.417944   83954 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0920 17:49:02.418057   83954 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0920 17:49:02.418153   83954 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0920 17:49:02.418225   83954 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0920 17:49:02.418320   83954 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0920 17:49:02.418361   83954 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0920 17:49:02.418403   83954 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0920 17:49:02.418409   83954 kubeadm.go:310] 
	I0920 17:49:02.418457   83954 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0920 17:49:02.418465   83954 kubeadm.go:310] 
	I0920 17:49:02.418549   83954 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0920 17:49:02.418556   83954 kubeadm.go:310] 
	I0920 17:49:02.418578   83954 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0920 17:49:02.418627   83954 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0920 17:49:02.418710   83954 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0920 17:49:02.418726   83954 kubeadm.go:310] 
	I0920 17:49:02.418788   83954 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0920 17:49:02.418795   83954 kubeadm.go:310] 
	I0920 17:49:02.418844   83954 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0920 17:49:02.418853   83954 kubeadm.go:310] 
	I0920 17:49:02.418899   83954 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0920 17:49:02.418965   83954 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0920 17:49:02.419042   83954 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0920 17:49:02.419051   83954 kubeadm.go:310] 
	I0920 17:49:02.419165   83954 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0920 17:49:02.419236   83954 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0920 17:49:02.419242   83954 kubeadm.go:310] 
	I0920 17:49:02.419310   83954 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token b5qlyc.i6pxpn8g9b1mexd1 \
	I0920 17:49:02.419396   83954 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:0c5f528c03e51be4d45dffa8bbd8a3f9d58c3bfd34ad2c40b6f9a1a38d47cba7 \
	I0920 17:49:02.419416   83954 kubeadm.go:310] 	--control-plane 
	I0920 17:49:02.419421   83954 kubeadm.go:310] 
	I0920 17:49:02.419494   83954 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0920 17:49:02.419501   83954 kubeadm.go:310] 
	I0920 17:49:02.419567   83954 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token b5qlyc.i6pxpn8g9b1mexd1 \
	I0920 17:49:02.419680   83954 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:0c5f528c03e51be4d45dffa8bbd8a3f9d58c3bfd34ad2c40b6f9a1a38d47cba7 
	I0920 17:49:02.419695   83954 cni.go:84] Creating CNI manager for ""
	I0920 17:49:02.419717   83954 cni.go:158] "kvm2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0920 17:49:02.421229   83954 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0920 17:49:02.422449   83954 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0920 17:49:02.432999   83954 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0920 17:49:02.449732   83954 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0920 17:49:02.449788   83954 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 17:49:02.449851   83954 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-545460 minikube.k8s.io/updated_at=2024_09_20T17_49_02_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=57d42ff8d541388826f5d9c37044129ec69c3d0a minikube.k8s.io/name=addons-545460 minikube.k8s.io/primary=true
	I0920 17:49:02.465964   83954 ops.go:34] apiserver oom_adj: -16
	I0920 17:49:02.590417   83954 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 17:49:03.090593   83954 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 17:49:03.591155   83954 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 17:49:04.091391   83954 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 17:49:04.590969   83954 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 17:49:05.090619   83954 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 17:49:05.590476   83954 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 17:49:06.090648   83954 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 17:49:06.590482   83954 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 17:49:07.091060   83954 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 17:49:07.205359   83954 kubeadm.go:1113] duration metric: took 4.755622859s to wait for elevateKubeSystemPrivileges
	I0920 17:49:07.205423   83954 kubeadm.go:394] duration metric: took 15.353502069s to StartCluster
	I0920 17:49:07.205450   83954 settings.go:142] acquiring lock: {Name:mkbfd05584ffbf20e5475d0208da1e5d2124d592 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 17:49:07.205615   83954 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19678-76160/kubeconfig
	I0920 17:49:07.206231   83954 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19678-76160/kubeconfig: {Name:mk68a361a46e6b636f6a31be141717aa7441405d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 17:49:07.206469   83954 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0920 17:49:07.206508   83954 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.174 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0920 17:49:07.206600   83954 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0920 17:49:07.206728   83954 addons.go:69] Setting yakd=true in profile "addons-545460"
	I0920 17:49:07.206764   83954 addons.go:234] Setting addon yakd=true in "addons-545460"
	I0920 17:49:07.206797   83954 addons.go:69] Setting default-storageclass=true in profile "addons-545460"
	I0920 17:49:07.206809   83954 host.go:66] Checking if "addons-545460" exists ...
	I0920 17:49:07.207202   83954 addons.go:69] Setting ingress=true in profile "addons-545460"
	I0920 17:49:07.207225   83954 addons.go:69] Setting ingress-dns=true in profile "addons-545460"
	I0920 17:49:07.207281   83954 addons.go:234] Setting addon ingress-dns=true in "addons-545460"
	I0920 17:49:07.207250   83954 addons.go:234] Setting addon ingress=true in "addons-545460"
	I0920 17:49:07.207345   83954 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-545460"
	I0920 17:49:07.207313   83954 config.go:182] Loaded profile config "addons-545460": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0920 17:49:07.207396   83954 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-545460"
	I0920 17:49:07.207436   83954 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-545460"
	I0920 17:49:07.207441   83954 addons.go:69] Setting metrics-server=true in profile "addons-545460"
	I0920 17:49:07.207460   83954 addons.go:234] Setting addon metrics-server=true in "addons-545460"
	I0920 17:49:07.207473   83954 host.go:66] Checking if "addons-545460" exists ...
	I0920 17:49:07.207483   83954 host.go:66] Checking if "addons-545460" exists ...
	I0920 17:49:07.207927   83954 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0920 17:49:07.207959   83954 addons.go:69] Setting registry=true in profile "addons-545460"
	I0920 17:49:07.207982   83954 addons.go:234] Setting addon registry=true in "addons-545460"
	I0920 17:49:07.207983   83954 addons.go:69] Setting cloud-spanner=true in profile "addons-545460"
	I0920 17:49:07.207988   83954 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0920 17:49:07.207998   83954 addons.go:234] Setting addon cloud-spanner=true in "addons-545460"
	I0920 17:49:07.208004   83954 host.go:66] Checking if "addons-545460" exists ...
	I0920 17:49:07.208016   83954 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0920 17:49:07.208029   83954 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 17:49:07.208032   83954 host.go:66] Checking if "addons-545460" exists ...
	I0920 17:49:07.208056   83954 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 17:49:07.208156   83954 addons.go:69] Setting gcp-auth=true in profile "addons-545460"
	I0920 17:49:07.208184   83954 mustload.go:65] Loading cluster: addons-545460
	I0920 17:49:07.208403   83954 config.go:182] Loaded profile config "addons-545460": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0920 17:49:07.208470   83954 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0920 17:49:07.208505   83954 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 17:49:07.208506   83954 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-545460"
	I0920 17:49:07.208561   83954 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-545460"
	I0920 17:49:07.208605   83954 host.go:66] Checking if "addons-545460" exists ...
	I0920 17:49:07.208979   83954 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0920 17:49:07.207970   83954 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 17:49:07.209020   83954 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 17:49:07.209047   83954 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0920 17:49:07.209089   83954 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 17:49:07.206802   83954 addons.go:69] Setting inspektor-gadget=true in profile "addons-545460"
	I0920 17:49:07.209678   83954 addons.go:234] Setting addon inspektor-gadget=true in "addons-545460"
	I0920 17:49:07.209786   83954 host.go:66] Checking if "addons-545460" exists ...
	I0920 17:49:07.210182   83954 addons.go:69] Setting storage-provisioner=true in profile "addons-545460"
	I0920 17:49:07.210199   83954 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-545460"
	I0920 17:49:07.210226   83954 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-545460"
	I0920 17:49:07.210232   83954 host.go:66] Checking if "addons-545460" exists ...
	I0920 17:49:07.210434   83954 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0920 17:49:07.210496   83954 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 17:49:07.210717   83954 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0920 17:49:07.210740   83954 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0920 17:49:07.210766   83954 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 17:49:07.210780   83954 addons.go:69] Setting volumesnapshots=true in profile "addons-545460"
	I0920 17:49:07.210799   83954 addons.go:234] Setting addon volumesnapshots=true in "addons-545460"
	I0920 17:49:07.210825   83954 host.go:66] Checking if "addons-545460" exists ...
	I0920 17:49:07.210959   83954 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 17:49:07.210210   83954 addons.go:234] Setting addon storage-provisioner=true in "addons-545460"
	I0920 17:49:07.211369   83954 host.go:66] Checking if "addons-545460" exists ...
	I0920 17:49:07.211791   83954 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0920 17:49:07.211845   83954 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 17:49:07.215936   83954 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0920 17:49:07.216015   83954 out.go:177] * Verifying Kubernetes components...
	I0920 17:49:07.217529   83954 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 17:49:07.217653   83954 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 17:49:07.221674   83954 host.go:66] Checking if "addons-545460" exists ...
	I0920 17:49:07.222270   83954 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0920 17:49:07.222320   83954 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 17:49:07.210771   83954 addons.go:69] Setting volcano=true in profile "addons-545460"
	I0920 17:49:07.225564   83954 addons.go:234] Setting addon volcano=true in "addons-545460"
	I0920 17:49:07.225601   83954 host.go:66] Checking if "addons-545460" exists ...
	I0920 17:49:07.226088   83954 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0920 17:49:07.226135   83954 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 17:49:07.229605   83954 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42139
	I0920 17:49:07.230156   83954 main.go:141] libmachine: () Calling .GetVersion
	I0920 17:49:07.230274   83954 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34919
	I0920 17:49:07.230625   83954 main.go:141] libmachine: () Calling .GetVersion
	I0920 17:49:07.230822   83954 main.go:141] libmachine: Using API Version  1
	I0920 17:49:07.230842   83954 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 17:49:07.231122   83954 main.go:141] libmachine: Using API Version  1
	I0920 17:49:07.231147   83954 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 17:49:07.231258   83954 main.go:141] libmachine: () Calling .GetMachineName
	I0920 17:49:07.231955   83954 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0920 17:49:07.232001   83954 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 17:49:07.232237   83954 main.go:141] libmachine: () Calling .GetMachineName
	I0920 17:49:07.232872   83954 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0920 17:49:07.232914   83954 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 17:49:07.235730   83954 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42641
	I0920 17:49:07.241205   83954 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41145
	I0920 17:49:07.241368   83954 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43343
	I0920 17:49:07.241818   83954 main.go:141] libmachine: () Calling .GetVersion
	I0920 17:49:07.242624   83954 main.go:141] libmachine: Using API Version  1
	I0920 17:49:07.242644   83954 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 17:49:07.242721   83954 main.go:141] libmachine: () Calling .GetVersion
	I0920 17:49:07.243061   83954 main.go:141] libmachine: () Calling .GetMachineName
	I0920 17:49:07.243777   83954 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0920 17:49:07.243839   83954 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 17:49:07.248711   83954 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33339
	I0920 17:49:07.249499   83954 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37171
	I0920 17:49:07.249646   83954 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42655
	I0920 17:49:07.249717   83954 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39575
	I0920 17:49:07.249793   83954 main.go:141] libmachine: Using API Version  1
	I0920 17:49:07.249814   83954 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 17:49:07.250369   83954 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0920 17:49:07.250423   83954 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 17:49:07.250495   83954 main.go:141] libmachine: () Calling .GetVersion
	I0920 17:49:07.251352   83954 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0920 17:49:07.251412   83954 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 17:49:07.251468   83954 main.go:141] libmachine: () Calling .GetVersion
	I0920 17:49:07.251526   83954 main.go:141] libmachine: () Calling .GetMachineName
	I0920 17:49:07.251615   83954 main.go:141] libmachine: () Calling .GetVersion
	I0920 17:49:07.251652   83954 main.go:141] libmachine: () Calling .GetVersion
	I0920 17:49:07.251695   83954 main.go:141] libmachine: () Calling .GetVersion
	I0920 17:49:07.252544   83954 main.go:141] libmachine: Using API Version  1
	I0920 17:49:07.252590   83954 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 17:49:07.252740   83954 main.go:141] libmachine: Using API Version  1
	I0920 17:49:07.252763   83954 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 17:49:07.252856   83954 main.go:141] libmachine: Using API Version  1
	I0920 17:49:07.252882   83954 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 17:49:07.252955   83954 main.go:141] libmachine: Using API Version  1
	I0920 17:49:07.252969   83954 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 17:49:07.253213   83954 main.go:141] libmachine: () Calling .GetMachineName
	I0920 17:49:07.253683   83954 main.go:141] libmachine: () Calling .GetMachineName
	I0920 17:49:07.254194   83954 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0920 17:49:07.254227   83954 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 17:49:07.254846   83954 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0920 17:49:07.254886   83954 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 17:49:07.255299   83954 main.go:141] libmachine: Using API Version  1
	I0920 17:49:07.255316   83954 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 17:49:07.255407   83954 main.go:141] libmachine: () Calling .GetMachineName
	I0920 17:49:07.255798   83954 main.go:141] libmachine: () Calling .GetMachineName
	I0920 17:49:07.256106   83954 main.go:141] libmachine: (addons-545460) Calling .GetState
	I0920 17:49:07.256655   83954 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0920 17:49:07.256693   83954 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 17:49:07.285914   83954 host.go:66] Checking if "addons-545460" exists ...
	I0920 17:49:07.286376   83954 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0920 17:49:07.286419   83954 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 17:49:07.286763   83954 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38567
	I0920 17:49:07.286925   83954 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44567
	I0920 17:49:07.286970   83954 main.go:141] libmachine: () Calling .GetMachineName
	I0920 17:49:07.286974   83954 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45597
	I0920 17:49:07.287103   83954 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33101
	I0920 17:49:07.287417   83954 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41125
	I0920 17:49:07.287437   83954 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0920 17:49:07.287478   83954 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 17:49:07.287583   83954 main.go:141] libmachine: () Calling .GetVersion
	I0920 17:49:07.287796   83954 main.go:141] libmachine: () Calling .GetVersion
	I0920 17:49:07.287914   83954 main.go:141] libmachine: () Calling .GetVersion
	I0920 17:49:07.288068   83954 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0920 17:49:07.288103   83954 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 17:49:07.288123   83954 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39115
	I0920 17:49:07.288342   83954 main.go:141] libmachine: Using API Version  1
	I0920 17:49:07.288356   83954 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 17:49:07.288422   83954 main.go:141] libmachine: Using API Version  1
	I0920 17:49:07.288435   83954 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 17:49:07.288426   83954 main.go:141] libmachine: () Calling .GetVersion
	I0920 17:49:07.288577   83954 main.go:141] libmachine: Using API Version  1
	I0920 17:49:07.288590   83954 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 17:49:07.288830   83954 main.go:141] libmachine: () Calling .GetMachineName
	I0920 17:49:07.288902   83954 main.go:141] libmachine: () Calling .GetVersion
	I0920 17:49:07.288950   83954 main.go:141] libmachine: () Calling .GetMachineName
	I0920 17:49:07.288965   83954 main.go:141] libmachine: () Calling .GetMachineName
	I0920 17:49:07.289271   83954 main.go:141] libmachine: Using API Version  1
	I0920 17:49:07.289290   83954 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 17:49:07.289367   83954 main.go:141] libmachine: () Calling .GetVersion
	I0920 17:49:07.289458   83954 main.go:141] libmachine: (addons-545460) Calling .GetState
	I0920 17:49:07.289503   83954 main.go:141] libmachine: (addons-545460) Calling .GetState
	I0920 17:49:07.289564   83954 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0920 17:49:07.289603   83954 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 17:49:07.289626   83954 main.go:141] libmachine: Using API Version  1
	I0920 17:49:07.289638   83954 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 17:49:07.289698   83954 main.go:141] libmachine: () Calling .GetMachineName
	I0920 17:49:07.290616   83954 main.go:141] libmachine: () Calling .GetMachineName
	I0920 17:49:07.291212   83954 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0920 17:49:07.291254   83954 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 17:49:07.291847   83954 main.go:141] libmachine: (addons-545460) Calling .DriverName
	I0920 17:49:07.291933   83954 main.go:141] libmachine: Using API Version  1
	I0920 17:49:07.291949   83954 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 17:49:07.292509   83954 main.go:141] libmachine: () Calling .GetMachineName
	I0920 17:49:07.292900   83954 main.go:141] libmachine: (addons-545460) Calling .GetState
	I0920 17:49:07.293034   83954 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0920 17:49:07.293069   83954 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 17:49:07.294433   83954 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-545460"
	I0920 17:49:07.294456   83954 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I0920 17:49:07.294592   83954 main.go:141] libmachine: (addons-545460) Calling .DriverName
	I0920 17:49:07.294477   83954 host.go:66] Checking if "addons-545460" exists ...
	I0920 17:49:07.295015   83954 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0920 17:49:07.295046   83954 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 17:49:07.295986   83954 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0920 17:49:07.296007   83954 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0920 17:49:07.296027   83954 main.go:141] libmachine: (addons-545460) Calling .GetSSHHostname
	I0920 17:49:07.296699   83954 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	I0920 17:49:07.297330   83954 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41593
	I0920 17:49:07.297801   83954 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0920 17:49:07.297818   83954 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0920 17:49:07.297835   83954 main.go:141] libmachine: (addons-545460) Calling .GetSSHHostname
	I0920 17:49:07.298515   83954 main.go:141] libmachine: () Calling .GetVersion
	I0920 17:49:07.299466   83954 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37047
	I0920 17:49:07.299898   83954 main.go:141] libmachine: () Calling .GetVersion
	I0920 17:49:07.300388   83954 main.go:141] libmachine: Using API Version  1
	I0920 17:49:07.300411   83954 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 17:49:07.300749   83954 main.go:141] libmachine: () Calling .GetMachineName
	I0920 17:49:07.300934   83954 main.go:141] libmachine: (addons-545460) Calling .GetState
	I0920 17:49:07.301671   83954 main.go:141] libmachine: (addons-545460) DBG | domain addons-545460 has defined MAC address 52:54:00:9d:f7:f4 in network mk-addons-545460
	I0920 17:49:07.301686   83954 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46713
	I0920 17:49:07.302122   83954 main.go:141] libmachine: () Calling .GetVersion
	I0920 17:49:07.302324   83954 main.go:141] libmachine: (addons-545460) DBG | domain addons-545460 has defined MAC address 52:54:00:9d:f7:f4 in network mk-addons-545460
	I0920 17:49:07.302833   83954 main.go:141] libmachine: (addons-545460) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:f7:f4", ip: ""} in network mk-addons-545460: {Iface:virbr1 ExpiryTime:2024-09-20 18:48:22 +0000 UTC Type:0 Mac:52:54:00:9d:f7:f4 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:addons-545460 Clientid:01:52:54:00:9d:f7:f4}
	I0920 17:49:07.302857   83954 main.go:141] libmachine: (addons-545460) DBG | domain addons-545460 has defined IP address 192.168.39.174 and MAC address 52:54:00:9d:f7:f4 in network mk-addons-545460
	I0920 17:49:07.302887   83954 main.go:141] libmachine: (addons-545460) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:f7:f4", ip: ""} in network mk-addons-545460: {Iface:virbr1 ExpiryTime:2024-09-20 18:48:22 +0000 UTC Type:0 Mac:52:54:00:9d:f7:f4 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:addons-545460 Clientid:01:52:54:00:9d:f7:f4}
	I0920 17:49:07.302980   83954 main.go:141] libmachine: (addons-545460) DBG | domain addons-545460 has defined IP address 192.168.39.174 and MAC address 52:54:00:9d:f7:f4 in network mk-addons-545460
	I0920 17:49:07.303233   83954 main.go:141] libmachine: (addons-545460) Calling .GetSSHPort
	I0920 17:49:07.303301   83954 main.go:141] libmachine: (addons-545460) Calling .GetSSHPort
	I0920 17:49:07.303523   83954 main.go:141] libmachine: (addons-545460) Calling .GetSSHKeyPath
	I0920 17:49:07.303574   83954 main.go:141] libmachine: (addons-545460) Calling .GetSSHKeyPath
	I0920 17:49:07.303798   83954 main.go:141] libmachine: (addons-545460) Calling .GetSSHUsername
	I0920 17:49:07.303836   83954 main.go:141] libmachine: (addons-545460) Calling .GetSSHUsername
	I0920 17:49:07.304027   83954 sshutil.go:53] new ssh client: &{IP:192.168.39.174 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19678-76160/.minikube/machines/addons-545460/id_rsa Username:docker}
	I0920 17:49:07.304328   83954 sshutil.go:53] new ssh client: &{IP:192.168.39.174 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19678-76160/.minikube/machines/addons-545460/id_rsa Username:docker}
	I0920 17:49:07.304865   83954 main.go:141] libmachine: Using API Version  1
	I0920 17:49:07.304887   83954 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 17:49:07.305280   83954 main.go:141] libmachine: () Calling .GetMachineName
	I0920 17:49:07.305608   83954 main.go:141] libmachine: (addons-545460) Calling .GetState
	I0920 17:49:07.307349   83954 addons.go:234] Setting addon default-storageclass=true in "addons-545460"
	I0920 17:49:07.307397   83954 host.go:66] Checking if "addons-545460" exists ...
	I0920 17:49:07.307660   83954 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42089
	I0920 17:49:07.307770   83954 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42887
	I0920 17:49:07.307773   83954 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0920 17:49:07.307811   83954 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 17:49:07.308192   83954 main.go:141] libmachine: () Calling .GetVersion
	I0920 17:49:07.308705   83954 main.go:141] libmachine: Using API Version  1
	I0920 17:49:07.308722   83954 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 17:49:07.308908   83954 main.go:141] libmachine: Using API Version  1
	I0920 17:49:07.308925   83954 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 17:49:07.309023   83954 main.go:141] libmachine: () Calling .GetMachineName
	I0920 17:49:07.309464   83954 main.go:141] libmachine: (addons-545460) Calling .DriverName
	I0920 17:49:07.310094   83954 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0920 17:49:07.310137   83954 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 17:49:07.310692   83954 main.go:141] libmachine: () Calling .GetMachineName
	I0920 17:49:07.310972   83954 main.go:141] libmachine: (addons-545460) Calling .GetState
	I0920 17:49:07.311911   83954 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.2
	I0920 17:49:07.312683   83954 main.go:141] libmachine: (addons-545460) Calling .DriverName
	I0920 17:49:07.313870   83954 main.go:141] libmachine: () Calling .GetVersion
	I0920 17:49:07.314180   83954 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0920 17:49:07.314236   83954 out.go:177]   - Using image docker.io/registry:2.8.3
	I0920 17:49:07.314494   83954 main.go:141] libmachine: Using API Version  1
	I0920 17:49:07.314523   83954 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 17:49:07.314937   83954 main.go:141] libmachine: () Calling .GetMachineName
	I0920 17:49:07.315161   83954 main.go:141] libmachine: (addons-545460) Calling .GetState
	I0920 17:49:07.317075   83954 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0920 17:49:07.317124   83954 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0920 17:49:07.317296   83954 main.go:141] libmachine: (addons-545460) Calling .DriverName
	I0920 17:49:07.319854   83954 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0920 17:49:07.319878   83954 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0920 17:49:07.319898   83954 main.go:141] libmachine: (addons-545460) Calling .GetSSHHostname
	I0920 17:49:07.320013   83954 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0920 17:49:07.320027   83954 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0920 17:49:07.320039   83954 main.go:141] libmachine: (addons-545460) Calling .GetSSHHostname
	I0920 17:49:07.320080   83954 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0920 17:49:07.322595   83954 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0920 17:49:07.323825   83954 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0920 17:49:07.323920   83954 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34341
	I0920 17:49:07.324363   83954 main.go:141] libmachine: () Calling .GetVersion
	I0920 17:49:07.324462   83954 main.go:141] libmachine: (addons-545460) DBG | domain addons-545460 has defined MAC address 52:54:00:9d:f7:f4 in network mk-addons-545460
	I0920 17:49:07.324897   83954 main.go:141] libmachine: Using API Version  1
	I0920 17:49:07.324920   83954 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 17:49:07.324989   83954 main.go:141] libmachine: (addons-545460) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:f7:f4", ip: ""} in network mk-addons-545460: {Iface:virbr1 ExpiryTime:2024-09-20 18:48:22 +0000 UTC Type:0 Mac:52:54:00:9d:f7:f4 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:addons-545460 Clientid:01:52:54:00:9d:f7:f4}
	I0920 17:49:07.325010   83954 main.go:141] libmachine: (addons-545460) DBG | domain addons-545460 has defined IP address 192.168.39.174 and MAC address 52:54:00:9d:f7:f4 in network mk-addons-545460
	I0920 17:49:07.325043   83954 main.go:141] libmachine: (addons-545460) Calling .GetSSHPort
	I0920 17:49:07.325112   83954 main.go:141] libmachine: (addons-545460) DBG | domain addons-545460 has defined MAC address 52:54:00:9d:f7:f4 in network mk-addons-545460
	I0920 17:49:07.325169   83954 main.go:141] libmachine: (addons-545460) Calling .GetSSHKeyPath
	I0920 17:49:07.325418   83954 main.go:141] libmachine: (addons-545460) Calling .GetSSHUsername
	I0920 17:49:07.325490   83954 main.go:141] libmachine: (addons-545460) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:f7:f4", ip: ""} in network mk-addons-545460: {Iface:virbr1 ExpiryTime:2024-09-20 18:48:22 +0000 UTC Type:0 Mac:52:54:00:9d:f7:f4 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:addons-545460 Clientid:01:52:54:00:9d:f7:f4}
	I0920 17:49:07.325554   83954 main.go:141] libmachine: (addons-545460) DBG | domain addons-545460 has defined IP address 192.168.39.174 and MAC address 52:54:00:9d:f7:f4 in network mk-addons-545460
	I0920 17:49:07.325682   83954 sshutil.go:53] new ssh client: &{IP:192.168.39.174 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19678-76160/.minikube/machines/addons-545460/id_rsa Username:docker}
	I0920 17:49:07.325771   83954 main.go:141] libmachine: (addons-545460) Calling .GetSSHPort
	I0920 17:49:07.325966   83954 main.go:141] libmachine: (addons-545460) Calling .GetSSHKeyPath
	I0920 17:49:07.326071   83954 main.go:141] libmachine: (addons-545460) Calling .GetSSHUsername
	I0920 17:49:07.326081   83954 main.go:141] libmachine: () Calling .GetMachineName
	I0920 17:49:07.326185   83954 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0920 17:49:07.326212   83954 sshutil.go:53] new ssh client: &{IP:192.168.39.174 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19678-76160/.minikube/machines/addons-545460/id_rsa Username:docker}
	I0920 17:49:07.326472   83954 main.go:141] libmachine: (addons-545460) Calling .GetState
	I0920 17:49:07.328298   83954 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42129
	I0920 17:49:07.328554   83954 main.go:141] libmachine: (addons-545460) Calling .DriverName
	I0920 17:49:07.328578   83954 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0920 17:49:07.328827   83954 main.go:141] libmachine: () Calling .GetVersion
	I0920 17:49:07.329372   83954 main.go:141] libmachine: Using API Version  1
	I0920 17:49:07.329452   83954 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 17:49:07.329863   83954 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.23
	I0920 17:49:07.330051   83954 main.go:141] libmachine: () Calling .GetMachineName
	I0920 17:49:07.330264   83954 main.go:141] libmachine: (addons-545460) Calling .GetState
	I0920 17:49:07.330980   83954 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0920 17:49:07.331080   83954 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0920 17:49:07.331096   83954 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0920 17:49:07.331114   83954 main.go:141] libmachine: (addons-545460) Calling .GetSSHHostname
	I0920 17:49:07.332686   83954 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39357
	I0920 17:49:07.333233   83954 main.go:141] libmachine: () Calling .GetVersion
	I0920 17:49:07.333357   83954 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0920 17:49:07.333829   83954 main.go:141] libmachine: Using API Version  1
	I0920 17:49:07.333847   83954 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 17:49:07.334296   83954 main.go:141] libmachine: () Calling .GetMachineName
	I0920 17:49:07.334344   83954 main.go:141] libmachine: (addons-545460) DBG | domain addons-545460 has defined MAC address 52:54:00:9d:f7:f4 in network mk-addons-545460
	I0920 17:49:07.334656   83954 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38257
	I0920 17:49:07.334709   83954 main.go:141] libmachine: (addons-545460) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:f7:f4", ip: ""} in network mk-addons-545460: {Iface:virbr1 ExpiryTime:2024-09-20 18:48:22 +0000 UTC Type:0 Mac:52:54:00:9d:f7:f4 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:addons-545460 Clientid:01:52:54:00:9d:f7:f4}
	I0920 17:49:07.334758   83954 main.go:141] libmachine: (addons-545460) DBG | domain addons-545460 has defined IP address 192.168.39.174 and MAC address 52:54:00:9d:f7:f4 in network mk-addons-545460
	I0920 17:49:07.334782   83954 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41793
	I0920 17:49:07.335036   83954 main.go:141] libmachine: (addons-545460) Calling .GetSSHPort
	I0920 17:49:07.335139   83954 main.go:141] libmachine: (addons-545460) Calling .GetSSHKeyPath
	I0920 17:49:07.335192   83954 main.go:141] libmachine: () Calling .GetVersion
	I0920 17:49:07.335315   83954 main.go:141] libmachine: (addons-545460) Calling .GetSSHUsername
	I0920 17:49:07.335389   83954 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0920 17:49:07.335426   83954 main.go:141] libmachine: () Calling .GetVersion
	I0920 17:49:07.335436   83954 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 17:49:07.335556   83954 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0920 17:49:07.335585   83954 sshutil.go:53] new ssh client: &{IP:192.168.39.174 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19678-76160/.minikube/machines/addons-545460/id_rsa Username:docker}
	I0920 17:49:07.335604   83954 main.go:141] libmachine: Using API Version  1
	I0920 17:49:07.335620   83954 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 17:49:07.335561   83954 main.go:141] libmachine: (addons-545460) Calling .DriverName
	I0920 17:49:07.336462   83954 main.go:141] libmachine: () Calling .GetMachineName
	I0920 17:49:07.336665   83954 main.go:141] libmachine: (addons-545460) Calling .GetState
	I0920 17:49:07.337177   83954 out.go:177]   - Using image docker.io/volcanosh/vc-webhook-manager:v1.9.0
	I0920 17:49:07.337186   83954 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0920 17:49:07.337203   83954 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0920 17:49:07.337225   83954 main.go:141] libmachine: (addons-545460) Calling .GetSSHHostname
	I0920 17:49:07.337820   83954 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35833
	I0920 17:49:07.338438   83954 main.go:141] libmachine: (addons-545460) Calling .DriverName
	I0920 17:49:07.338512   83954 main.go:141] libmachine: () Calling .GetVersion
	I0920 17:49:07.338592   83954 main.go:141] libmachine: Using API Version  1
	I0920 17:49:07.338616   83954 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 17:49:07.339015   83954 main.go:141] libmachine: () Calling .GetMachineName
	I0920 17:49:07.339139   83954 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40931
	I0920 17:49:07.339294   83954 main.go:141] libmachine: (addons-545460) Calling .DriverName
	I0920 17:49:07.339627   83954 main.go:141] libmachine: () Calling .GetVersion
	I0920 17:49:07.339933   83954 out.go:177]   - Using image docker.io/volcanosh/vc-controller-manager:v1.9.0
	I0920 17:49:07.340207   83954 main.go:141] libmachine: Using API Version  1
	I0920 17:49:07.340225   83954 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 17:49:07.340452   83954 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37081
	I0920 17:49:07.340610   83954 main.go:141] libmachine: () Calling .GetMachineName
	I0920 17:49:07.340797   83954 main.go:141] libmachine: (addons-545460) Calling .GetState
	I0920 17:49:07.341151   83954 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0920 17:49:07.341496   83954 main.go:141] libmachine: () Calling .GetVersion
	I0920 17:49:07.342181   83954 main.go:141] libmachine: Using API Version  1
	I0920 17:49:07.342196   83954 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 17:49:07.342372   83954 out.go:177]   - Using image docker.io/volcanosh/vc-scheduler:v1.9.0
	I0920 17:49:07.342468   83954 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0920 17:49:07.342491   83954 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0920 17:49:07.342512   83954 main.go:141] libmachine: (addons-545460) Calling .GetSSHHostname
	I0920 17:49:07.343162   83954 main.go:141] libmachine: () Calling .GetMachineName
	I0920 17:49:07.343225   83954 main.go:141] libmachine: (addons-545460) Calling .DriverName
	I0920 17:49:07.343355   83954 main.go:141] libmachine: Using API Version  1
	I0920 17:49:07.343368   83954 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 17:49:07.343432   83954 main.go:141] libmachine: (addons-545460) Calling .GetState
	I0920 17:49:07.344182   83954 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44759
	I0920 17:49:07.344602   83954 main.go:141] libmachine: () Calling .GetVersion
	I0920 17:49:07.344836   83954 addons.go:431] installing /etc/kubernetes/addons/volcano-deployment.yaml
	I0920 17:49:07.344862   83954 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volcano-deployment.yaml (434001 bytes)
	I0920 17:49:07.344879   83954 main.go:141] libmachine: (addons-545460) Calling .GetSSHHostname
	I0920 17:49:07.344903   83954 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0
	I0920 17:49:07.345498   83954 main.go:141] libmachine: (addons-545460) DBG | domain addons-545460 has defined MAC address 52:54:00:9d:f7:f4 in network mk-addons-545460
	I0920 17:49:07.345797   83954 main.go:141] libmachine: () Calling .GetMachineName
	I0920 17:49:07.345901   83954 main.go:141] libmachine: Using API Version  1
	I0920 17:49:07.345915   83954 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 17:49:07.346133   83954 main.go:141] libmachine: (addons-545460) Calling .GetState
	I0920 17:49:07.346142   83954 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0920 17:49:07.346153   83954 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0920 17:49:07.346135   83954 main.go:141] libmachine: (addons-545460) Calling .DriverName
	I0920 17:49:07.346177   83954 main.go:141] libmachine: (addons-545460) Calling .GetSSHHostname
	I0920 17:49:07.346166   83954 main.go:141] libmachine: (addons-545460) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:f7:f4", ip: ""} in network mk-addons-545460: {Iface:virbr1 ExpiryTime:2024-09-20 18:48:22 +0000 UTC Type:0 Mac:52:54:00:9d:f7:f4 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:addons-545460 Clientid:01:52:54:00:9d:f7:f4}
	I0920 17:49:07.346200   83954 main.go:141] libmachine: (addons-545460) DBG | domain addons-545460 has defined IP address 192.168.39.174 and MAC address 52:54:00:9d:f7:f4 in network mk-addons-545460
	I0920 17:49:07.346396   83954 main.go:141] libmachine: (addons-545460) Calling .GetSSHPort
	I0920 17:49:07.346611   83954 main.go:141] libmachine: (addons-545460) Calling .GetSSHKeyPath
	I0920 17:49:07.346770   83954 main.go:141] libmachine: (addons-545460) Calling .GetSSHUsername
	I0920 17:49:07.346951   83954 sshutil.go:53] new ssh client: &{IP:192.168.39.174 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19678-76160/.minikube/machines/addons-545460/id_rsa Username:docker}
	I0920 17:49:07.347346   83954 main.go:141] libmachine: () Calling .GetMachineName
	I0920 17:49:07.348451   83954 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0920 17:49:07.349148   83954 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0920 17:49:07.349240   83954 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 17:49:07.349852   83954 main.go:141] libmachine: (addons-545460) DBG | domain addons-545460 has defined MAC address 52:54:00:9d:f7:f4 in network mk-addons-545460
	I0920 17:49:07.349879   83954 main.go:141] libmachine: (addons-545460) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:f7:f4", ip: ""} in network mk-addons-545460: {Iface:virbr1 ExpiryTime:2024-09-20 18:48:22 +0000 UTC Type:0 Mac:52:54:00:9d:f7:f4 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:addons-545460 Clientid:01:52:54:00:9d:f7:f4}
	I0920 17:49:07.349894   83954 main.go:141] libmachine: (addons-545460) DBG | domain addons-545460 has defined IP address 192.168.39.174 and MAC address 52:54:00:9d:f7:f4 in network mk-addons-545460
	I0920 17:49:07.349928   83954 main.go:141] libmachine: (addons-545460) Calling .DriverName
	I0920 17:49:07.350039   83954 main.go:141] libmachine: (addons-545460) Calling .GetSSHPort
	I0920 17:49:07.350090   83954 main.go:141] libmachine: (addons-545460) DBG | domain addons-545460 has defined MAC address 52:54:00:9d:f7:f4 in network mk-addons-545460
	I0920 17:49:07.350093   83954 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0920 17:49:07.350108   83954 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0920 17:49:07.350131   83954 main.go:141] libmachine: (addons-545460) Calling .GetSSHHostname
	I0920 17:49:07.350256   83954 main.go:141] libmachine: (addons-545460) Calling .GetSSHKeyPath
	I0920 17:49:07.350530   83954 main.go:141] libmachine: (addons-545460) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:f7:f4", ip: ""} in network mk-addons-545460: {Iface:virbr1 ExpiryTime:2024-09-20 18:48:22 +0000 UTC Type:0 Mac:52:54:00:9d:f7:f4 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:addons-545460 Clientid:01:52:54:00:9d:f7:f4}
	I0920 17:49:07.350549   83954 main.go:141] libmachine: (addons-545460) DBG | domain addons-545460 has defined IP address 192.168.39.174 and MAC address 52:54:00:9d:f7:f4 in network mk-addons-545460
	I0920 17:49:07.350577   83954 main.go:141] libmachine: (addons-545460) Calling .GetSSHUsername
	I0920 17:49:07.350621   83954 main.go:141] libmachine: (addons-545460) Calling .GetSSHPort
	I0920 17:49:07.350746   83954 main.go:141] libmachine: (addons-545460) Calling .GetSSHKeyPath
	I0920 17:49:07.350802   83954 sshutil.go:53] new ssh client: &{IP:192.168.39.174 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19678-76160/.minikube/machines/addons-545460/id_rsa Username:docker}
	I0920 17:49:07.351410   83954 main.go:141] libmachine: (addons-545460) DBG | domain addons-545460 has defined MAC address 52:54:00:9d:f7:f4 in network mk-addons-545460
	I0920 17:49:07.351463   83954 main.go:141] libmachine: (addons-545460) Calling .GetSSHUsername
	I0920 17:49:07.351802   83954 main.go:141] libmachine: (addons-545460) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:f7:f4", ip: ""} in network mk-addons-545460: {Iface:virbr1 ExpiryTime:2024-09-20 18:48:22 +0000 UTC Type:0 Mac:52:54:00:9d:f7:f4 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:addons-545460 Clientid:01:52:54:00:9d:f7:f4}
	I0920 17:49:07.351818   83954 main.go:141] libmachine: (addons-545460) DBG | domain addons-545460 has defined IP address 192.168.39.174 and MAC address 52:54:00:9d:f7:f4 in network mk-addons-545460
	I0920 17:49:07.351874   83954 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0920 17:49:07.352002   83954 sshutil.go:53] new ssh client: &{IP:192.168.39.174 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19678-76160/.minikube/machines/addons-545460/id_rsa Username:docker}
	I0920 17:49:07.352515   83954 main.go:141] libmachine: (addons-545460) Calling .GetSSHPort
	I0920 17:49:07.352886   83954 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0920 17:49:07.352899   83954 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0920 17:49:07.352914   83954 main.go:141] libmachine: (addons-545460) Calling .GetSSHHostname
	I0920 17:49:07.352971   83954 main.go:141] libmachine: (addons-545460) Calling .GetSSHKeyPath
	I0920 17:49:07.353151   83954 main.go:141] libmachine: (addons-545460) Calling .GetSSHUsername
	I0920 17:49:07.353316   83954 sshutil.go:53] new ssh client: &{IP:192.168.39.174 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19678-76160/.minikube/machines/addons-545460/id_rsa Username:docker}
	I0920 17:49:07.353567   83954 main.go:141] libmachine: (addons-545460) DBG | domain addons-545460 has defined MAC address 52:54:00:9d:f7:f4 in network mk-addons-545460
	I0920 17:49:07.354468   83954 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37817
	I0920 17:49:07.354910   83954 main.go:141] libmachine: () Calling .GetVersion
	I0920 17:49:07.355326   83954 main.go:141] libmachine: Using API Version  1
	I0920 17:49:07.355346   83954 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 17:49:07.355457   83954 main.go:141] libmachine: (addons-545460) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:f7:f4", ip: ""} in network mk-addons-545460: {Iface:virbr1 ExpiryTime:2024-09-20 18:48:22 +0000 UTC Type:0 Mac:52:54:00:9d:f7:f4 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:addons-545460 Clientid:01:52:54:00:9d:f7:f4}
	I0920 17:49:07.355473   83954 main.go:141] libmachine: (addons-545460) DBG | domain addons-545460 has defined IP address 192.168.39.174 and MAC address 52:54:00:9d:f7:f4 in network mk-addons-545460
	I0920 17:49:07.355797   83954 main.go:141] libmachine: () Calling .GetMachineName
	I0920 17:49:07.355998   83954 main.go:141] libmachine: (addons-545460) Calling .GetState
	I0920 17:49:07.357720   83954 main.go:141] libmachine: (addons-545460) Calling .GetSSHPort
	I0920 17:49:07.357872   83954 main.go:141] libmachine: (addons-545460) Calling .GetSSHKeyPath
	I0920 17:49:07.358037   83954 main.go:141] libmachine: (addons-545460) Calling .GetSSHUsername
	I0920 17:49:07.358213   83954 sshutil.go:53] new ssh client: &{IP:192.168.39.174 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19678-76160/.minikube/machines/addons-545460/id_rsa Username:docker}
	I0920 17:49:07.358839   83954 main.go:141] libmachine: (addons-545460) Calling .DriverName
	I0920 17:49:07.360498   83954 main.go:141] libmachine: (addons-545460) DBG | domain addons-545460 has defined MAC address 52:54:00:9d:f7:f4 in network mk-addons-545460
	I0920 17:49:07.360534   83954 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0920 17:49:07.361012   83954 main.go:141] libmachine: (addons-545460) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:f7:f4", ip: ""} in network mk-addons-545460: {Iface:virbr1 ExpiryTime:2024-09-20 18:48:22 +0000 UTC Type:0 Mac:52:54:00:9d:f7:f4 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:addons-545460 Clientid:01:52:54:00:9d:f7:f4}
	I0920 17:49:07.361028   83954 main.go:141] libmachine: (addons-545460) DBG | domain addons-545460 has defined IP address 192.168.39.174 and MAC address 52:54:00:9d:f7:f4 in network mk-addons-545460
	I0920 17:49:07.361155   83954 main.go:141] libmachine: (addons-545460) Calling .GetSSHPort
	I0920 17:49:07.361301   83954 main.go:141] libmachine: (addons-545460) Calling .GetSSHKeyPath
	I0920 17:49:07.361452   83954 main.go:141] libmachine: (addons-545460) Calling .GetSSHUsername
	I0920 17:49:07.361550   83954 sshutil.go:53] new ssh client: &{IP:192.168.39.174 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19678-76160/.minikube/machines/addons-545460/id_rsa Username:docker}
	I0920 17:49:07.361727   83954 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0920 17:49:07.361762   83954 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0920 17:49:07.361789   83954 main.go:141] libmachine: (addons-545460) Calling .GetSSHHostname
	I0920 17:49:07.364760   83954 main.go:141] libmachine: (addons-545460) DBG | domain addons-545460 has defined MAC address 52:54:00:9d:f7:f4 in network mk-addons-545460
	I0920 17:49:07.365177   83954 main.go:141] libmachine: (addons-545460) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:f7:f4", ip: ""} in network mk-addons-545460: {Iface:virbr1 ExpiryTime:2024-09-20 18:48:22 +0000 UTC Type:0 Mac:52:54:00:9d:f7:f4 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:addons-545460 Clientid:01:52:54:00:9d:f7:f4}
	I0920 17:49:07.365309   83954 main.go:141] libmachine: (addons-545460) DBG | domain addons-545460 has defined IP address 192.168.39.174 and MAC address 52:54:00:9d:f7:f4 in network mk-addons-545460
	I0920 17:49:07.365329   83954 main.go:141] libmachine: (addons-545460) Calling .GetSSHPort
	I0920 17:49:07.365508   83954 main.go:141] libmachine: (addons-545460) Calling .GetSSHKeyPath
	I0920 17:49:07.365665   83954 main.go:141] libmachine: (addons-545460) Calling .GetSSHUsername
	I0920 17:49:07.365797   83954 sshutil.go:53] new ssh client: &{IP:192.168.39.174 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19678-76160/.minikube/machines/addons-545460/id_rsa Username:docker}
	I0920 17:49:07.368763   83954 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39415
	I0920 17:49:07.369161   83954 main.go:141] libmachine: () Calling .GetVersion
	I0920 17:49:07.369275   83954 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40629
	I0920 17:49:07.369619   83954 main.go:141] libmachine: () Calling .GetVersion
	I0920 17:49:07.369704   83954 main.go:141] libmachine: Using API Version  1
	I0920 17:49:07.369723   83954 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 17:49:07.370011   83954 main.go:141] libmachine: () Calling .GetMachineName
	I0920 17:49:07.370113   83954 main.go:141] libmachine: (addons-545460) Calling .GetState
	I0920 17:49:07.370144   83954 main.go:141] libmachine: Using API Version  1
	I0920 17:49:07.370159   83954 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 17:49:07.370475   83954 main.go:141] libmachine: () Calling .GetMachineName
	I0920 17:49:07.370796   83954 main.go:141] libmachine: (addons-545460) Calling .GetState
	I0920 17:49:07.371975   83954 main.go:141] libmachine: (addons-545460) Calling .DriverName
	I0920 17:49:07.372409   83954 main.go:141] libmachine: (addons-545460) Calling .DriverName
	I0920 17:49:07.372570   83954 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0920 17:49:07.372586   83954 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0920 17:49:07.372603   83954 main.go:141] libmachine: (addons-545460) Calling .GetSSHHostname
	I0920 17:49:07.373703   83954 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0920 17:49:07.375218   83954 out.go:177]   - Using image docker.io/busybox:stable
	I0920 17:49:07.375250   83954 main.go:141] libmachine: (addons-545460) DBG | domain addons-545460 has defined MAC address 52:54:00:9d:f7:f4 in network mk-addons-545460
	I0920 17:49:07.375574   83954 main.go:141] libmachine: (addons-545460) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:f7:f4", ip: ""} in network mk-addons-545460: {Iface:virbr1 ExpiryTime:2024-09-20 18:48:22 +0000 UTC Type:0 Mac:52:54:00:9d:f7:f4 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:addons-545460 Clientid:01:52:54:00:9d:f7:f4}
	I0920 17:49:07.375603   83954 main.go:141] libmachine: (addons-545460) DBG | domain addons-545460 has defined IP address 192.168.39.174 and MAC address 52:54:00:9d:f7:f4 in network mk-addons-545460
	I0920 17:49:07.375691   83954 main.go:141] libmachine: (addons-545460) Calling .GetSSHPort
	I0920 17:49:07.375821   83954 main.go:141] libmachine: (addons-545460) Calling .GetSSHKeyPath
	I0920 17:49:07.375913   83954 main.go:141] libmachine: (addons-545460) Calling .GetSSHUsername
	I0920 17:49:07.375977   83954 sshutil.go:53] new ssh client: &{IP:192.168.39.174 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19678-76160/.minikube/machines/addons-545460/id_rsa Username:docker}
	I0920 17:49:07.376560   83954 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0920 17:49:07.376577   83954 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0920 17:49:07.376596   83954 main.go:141] libmachine: (addons-545460) Calling .GetSSHHostname
	I0920 17:49:07.379174   83954 main.go:141] libmachine: (addons-545460) DBG | domain addons-545460 has defined MAC address 52:54:00:9d:f7:f4 in network mk-addons-545460
	I0920 17:49:07.379480   83954 main.go:141] libmachine: (addons-545460) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:f7:f4", ip: ""} in network mk-addons-545460: {Iface:virbr1 ExpiryTime:2024-09-20 18:48:22 +0000 UTC Type:0 Mac:52:54:00:9d:f7:f4 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:addons-545460 Clientid:01:52:54:00:9d:f7:f4}
	I0920 17:49:07.379501   83954 main.go:141] libmachine: (addons-545460) DBG | domain addons-545460 has defined IP address 192.168.39.174 and MAC address 52:54:00:9d:f7:f4 in network mk-addons-545460
	I0920 17:49:07.379642   83954 main.go:141] libmachine: (addons-545460) Calling .GetSSHPort
	I0920 17:49:07.379813   83954 main.go:141] libmachine: (addons-545460) Calling .GetSSHKeyPath
	I0920 17:49:07.379911   83954 main.go:141] libmachine: (addons-545460) Calling .GetSSHUsername
	I0920 17:49:07.380420   83954 sshutil.go:53] new ssh client: &{IP:192.168.39.174 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19678-76160/.minikube/machines/addons-545460/id_rsa Username:docker}
	I0920 17:49:07.875024   83954 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0920 17:49:07.875049   83954 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0920 17:49:07.905678   83954 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0920 17:49:07.905806   83954 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0920 17:49:07.970298   83954 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml
	I0920 17:49:07.981137   83954 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0920 17:49:07.981168   83954 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0920 17:49:08.000730   83954 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0920 17:49:08.022875   83954 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0920 17:49:08.022915   83954 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0920 17:49:08.064186   83954 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0920 17:49:08.064213   83954 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0920 17:49:08.132065   83954 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0920 17:49:08.141309   83954 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0920 17:49:08.141336   83954 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0920 17:49:08.163057   83954 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0920 17:49:08.246632   83954 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0920 17:49:08.246665   83954 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0920 17:49:08.256858   83954 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0920 17:49:08.320766   83954 node_ready.go:35] waiting up to 6m0s for node "addons-545460" to be "Ready" ...
	I0920 17:49:08.325100   83954 node_ready.go:49] node "addons-545460" has status "Ready":"True"
	I0920 17:49:08.325129   83954 node_ready.go:38] duration metric: took 4.323581ms for node "addons-545460" to be "Ready" ...
	I0920 17:49:08.325140   83954 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0920 17:49:08.335647   83954 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-549g6" in "kube-system" namespace to be "Ready" ...
	I0920 17:49:08.337934   83954 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0920 17:49:08.337957   83954 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0920 17:49:08.370321   83954 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0920 17:49:08.392267   83954 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0920 17:49:08.448905   83954 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0920 17:49:08.607680   83954 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0920 17:49:08.607721   83954 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0920 17:49:08.655137   83954 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0920 17:49:08.655174   83954 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0920 17:49:08.822488   83954 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0920 17:49:08.822521   83954 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0920 17:49:08.843348   83954 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0920 17:49:08.843368   83954 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0920 17:49:08.877776   83954 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0920 17:49:08.877802   83954 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0920 17:49:08.914707   83954 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0920 17:49:08.914734   83954 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0920 17:49:09.046912   83954 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0920 17:49:09.046945   83954 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0920 17:49:09.048420   83954 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0920 17:49:09.048447   83954 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0920 17:49:09.120460   83954 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0920 17:49:09.120486   83954 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0920 17:49:09.162917   83954 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0920 17:49:09.162941   83954 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0920 17:49:09.228740   83954 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0920 17:49:09.228783   83954 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0920 17:49:09.246321   83954 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0920 17:49:09.246351   83954 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0920 17:49:09.252632   83954 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0920 17:49:09.344440   83954 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0920 17:49:09.344469   83954 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0920 17:49:09.392527   83954 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0920 17:49:09.414746   83954 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0920 17:49:09.414785   83954 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0920 17:49:09.419879   83954 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0920 17:49:09.419903   83954 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0920 17:49:09.477506   83954 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0920 17:49:09.541182   83954 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0920 17:49:09.541218   83954 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0920 17:49:09.580721   83954 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0920 17:49:09.580746   83954 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0920 17:49:09.585681   83954 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0920 17:49:09.585702   83954 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0920 17:49:09.883234   83954 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0920 17:49:09.883264   83954 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0920 17:49:09.894417   83954 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0920 17:49:09.894438   83954 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0920 17:49:10.002726   83954 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0920 17:49:10.082745   83954 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0920 17:49:10.082786   83954 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0920 17:49:10.085522   83954 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0920 17:49:10.341119   83954 pod_ready.go:103] pod "coredns-7c65d6cfc9-549g6" in "kube-system" namespace has status "Ready":"False"
	I0920 17:49:10.401242   83954 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0920 17:49:10.401281   83954 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0920 17:49:10.460461   83954 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.554617291s)
	I0920 17:49:10.460506   83954 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0920 17:49:10.719595   83954 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0920 17:49:10.719619   83954 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0920 17:49:10.963799   83954 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-545460" context rescaled to 1 replicas
	I0920 17:49:10.964730   83954 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0920 17:49:10.964758   83954 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0920 17:49:11.123291   83954 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0920 17:49:12.470528   83954 pod_ready.go:103] pod "coredns-7c65d6cfc9-549g6" in "kube-system" namespace has status "Ready":"False"
	I0920 17:49:14.385658   83954 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0920 17:49:14.388041   83954 main.go:141] libmachine: (addons-545460) Calling .GetSSHHostname
	I0920 17:49:14.391282   83954 main.go:141] libmachine: (addons-545460) DBG | domain addons-545460 has defined MAC address 52:54:00:9d:f7:f4 in network mk-addons-545460
	I0920 17:49:14.391748   83954 main.go:141] libmachine: (addons-545460) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:f7:f4", ip: ""} in network mk-addons-545460: {Iface:virbr1 ExpiryTime:2024-09-20 18:48:22 +0000 UTC Type:0 Mac:52:54:00:9d:f7:f4 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:addons-545460 Clientid:01:52:54:00:9d:f7:f4}
	I0920 17:49:14.391779   83954 main.go:141] libmachine: (addons-545460) DBG | domain addons-545460 has defined IP address 192.168.39.174 and MAC address 52:54:00:9d:f7:f4 in network mk-addons-545460
	I0920 17:49:14.391930   83954 main.go:141] libmachine: (addons-545460) Calling .GetSSHPort
	I0920 17:49:14.392128   83954 main.go:141] libmachine: (addons-545460) Calling .GetSSHKeyPath
	I0920 17:49:14.392365   83954 main.go:141] libmachine: (addons-545460) Calling .GetSSHUsername
	I0920 17:49:14.392512   83954 sshutil.go:53] new ssh client: &{IP:192.168.39.174 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19678-76160/.minikube/machines/addons-545460/id_rsa Username:docker}
	I0920 17:49:14.811599   83954 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0920 17:49:14.841993   83954 pod_ready.go:103] pod "coredns-7c65d6cfc9-549g6" in "kube-system" namespace has status "Ready":"False"
	I0920 17:49:14.872297   83954 addons.go:234] Setting addon gcp-auth=true in "addons-545460"
	I0920 17:49:14.872357   83954 host.go:66] Checking if "addons-545460" exists ...
	I0920 17:49:14.872682   83954 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0920 17:49:14.872727   83954 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 17:49:14.889318   83954 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35203
	I0920 17:49:14.889788   83954 main.go:141] libmachine: () Calling .GetVersion
	I0920 17:49:14.890344   83954 main.go:141] libmachine: Using API Version  1
	I0920 17:49:14.890372   83954 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 17:49:14.890709   83954 main.go:141] libmachine: () Calling .GetMachineName
	I0920 17:49:14.891228   83954 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0920 17:49:14.891275   83954 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 17:49:14.906590   83954 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45603
	I0920 17:49:14.907075   83954 main.go:141] libmachine: () Calling .GetVersion
	I0920 17:49:14.907559   83954 main.go:141] libmachine: Using API Version  1
	I0920 17:49:14.907579   83954 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 17:49:14.907926   83954 main.go:141] libmachine: () Calling .GetMachineName
	I0920 17:49:14.908114   83954 main.go:141] libmachine: (addons-545460) Calling .GetState
	I0920 17:49:14.909489   83954 main.go:141] libmachine: (addons-545460) Calling .DriverName
	I0920 17:49:14.909717   83954 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0920 17:49:14.909749   83954 main.go:141] libmachine: (addons-545460) Calling .GetSSHHostname
	I0920 17:49:14.912869   83954 main.go:141] libmachine: (addons-545460) DBG | domain addons-545460 has defined MAC address 52:54:00:9d:f7:f4 in network mk-addons-545460
	I0920 17:49:14.913317   83954 main.go:141] libmachine: (addons-545460) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:f7:f4", ip: ""} in network mk-addons-545460: {Iface:virbr1 ExpiryTime:2024-09-20 18:48:22 +0000 UTC Type:0 Mac:52:54:00:9d:f7:f4 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:addons-545460 Clientid:01:52:54:00:9d:f7:f4}
	I0920 17:49:14.913347   83954 main.go:141] libmachine: (addons-545460) DBG | domain addons-545460 has defined IP address 192.168.39.174 and MAC address 52:54:00:9d:f7:f4 in network mk-addons-545460
	I0920 17:49:14.913478   83954 main.go:141] libmachine: (addons-545460) Calling .GetSSHPort
	I0920 17:49:14.913650   83954 main.go:141] libmachine: (addons-545460) Calling .GetSSHKeyPath
	I0920 17:49:14.913829   83954 main.go:141] libmachine: (addons-545460) Calling .GetSSHUsername
	I0920 17:49:14.913975   83954 sshutil.go:53] new ssh client: &{IP:192.168.39.174 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19678-76160/.minikube/machines/addons-545460/id_rsa Username:docker}
	I0920 17:49:16.867240   83954 pod_ready.go:103] pod "coredns-7c65d6cfc9-549g6" in "kube-system" namespace has status "Ready":"False"
	I0920 17:49:18.963022   83954 pod_ready.go:103] pod "coredns-7c65d6cfc9-549g6" in "kube-system" namespace has status "Ready":"False"
	I0920 17:49:19.860853   83954 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml: (11.890498768s)
	I0920 17:49:19.860914   83954 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (11.860148909s)
	I0920 17:49:19.860934   83954 main.go:141] libmachine: Making call to close driver server
	I0920 17:49:19.860951   83954 main.go:141] libmachine: (addons-545460) Calling .Close
	I0920 17:49:19.860954   83954 main.go:141] libmachine: Making call to close driver server
	I0920 17:49:19.860968   83954 main.go:141] libmachine: (addons-545460) Calling .Close
	I0920 17:49:19.860985   83954 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (11.728887635s)
	I0920 17:49:19.861024   83954 main.go:141] libmachine: Making call to close driver server
	I0920 17:49:19.861049   83954 main.go:141] libmachine: (addons-545460) Calling .Close
	I0920 17:49:19.861071   83954 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (11.697975619s)
	I0920 17:49:19.861145   83954 main.go:141] libmachine: Making call to close driver server
	I0920 17:49:19.861167   83954 main.go:141] libmachine: (addons-545460) Calling .Close
	I0920 17:49:19.861191   83954 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (11.604302541s)
	I0920 17:49:19.861231   83954 main.go:141] libmachine: Making call to close driver server
	I0920 17:49:19.861249   83954 main.go:141] libmachine: (addons-545460) Calling .Close
	I0920 17:49:19.861343   83954 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (11.490996521s)
	I0920 17:49:19.861363   83954 main.go:141] libmachine: Making call to close driver server
	I0920 17:49:19.861370   83954 main.go:141] libmachine: (addons-545460) Calling .Close
	I0920 17:49:19.861570   83954 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (11.469271651s)
	I0920 17:49:19.861591   83954 main.go:141] libmachine: Making call to close driver server
	I0920 17:49:19.861599   83954 main.go:141] libmachine: (addons-545460) Calling .Close
	I0920 17:49:19.861641   83954 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (11.412702176s)
	I0920 17:49:19.861659   83954 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (10.609005279s)
	I0920 17:49:19.861666   83954 main.go:141] libmachine: Making call to close driver server
	I0920 17:49:19.861674   83954 main.go:141] libmachine: Making call to close driver server
	I0920 17:49:19.861676   83954 main.go:141] libmachine: (addons-545460) Calling .Close
	I0920 17:49:19.861684   83954 main.go:141] libmachine: (addons-545460) Calling .Close
	I0920 17:49:19.861771   83954 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (10.469211454s)
	I0920 17:49:19.861783   83954 main.go:141] libmachine: Making call to close driver server
	I0920 17:49:19.861791   83954 main.go:141] libmachine: (addons-545460) Calling .Close
	I0920 17:49:19.861846   83954 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (10.384312039s)
	I0920 17:49:19.861859   83954 main.go:141] libmachine: Making call to close driver server
	I0920 17:49:19.861867   83954 main.go:141] libmachine: (addons-545460) Calling .Close
	I0920 17:49:19.861972   83954 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (9.859200345s)
	W0920 17:49:19.862007   83954 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0920 17:49:19.862037   83954 retry.go:31] will retry after 310.388703ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0920 17:49:19.862132   83954 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (9.776578229s)
	I0920 17:49:19.862152   83954 main.go:141] libmachine: Making call to close driver server
	I0920 17:49:19.862161   83954 main.go:141] libmachine: (addons-545460) Calling .Close
	I0920 17:49:19.864062   83954 main.go:141] libmachine: Successfully made call to close driver server
	I0920 17:49:19.864088   83954 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 17:49:19.864098   83954 main.go:141] libmachine: Making call to close driver server
	I0920 17:49:19.864105   83954 main.go:141] libmachine: (addons-545460) Calling .Close
	I0920 17:49:19.864395   83954 main.go:141] libmachine: (addons-545460) DBG | Closing plugin on server side
	I0920 17:49:19.864436   83954 main.go:141] libmachine: Successfully made call to close driver server
	I0920 17:49:19.864443   83954 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 17:49:19.864451   83954 addons.go:475] Verifying addon ingress=true in "addons-545460"
	I0920 17:49:19.864851   83954 main.go:141] libmachine: (addons-545460) DBG | Closing plugin on server side
	I0920 17:49:19.864932   83954 main.go:141] libmachine: (addons-545460) DBG | Closing plugin on server side
	I0920 17:49:19.864988   83954 main.go:141] libmachine: Successfully made call to close driver server
	I0920 17:49:19.865014   83954 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 17:49:19.865031   83954 main.go:141] libmachine: Making call to close driver server
	I0920 17:49:19.865047   83954 main.go:141] libmachine: (addons-545460) Calling .Close
	I0920 17:49:19.865123   83954 main.go:141] libmachine: (addons-545460) DBG | Closing plugin on server side
	I0920 17:49:19.865163   83954 main.go:141] libmachine: Successfully made call to close driver server
	I0920 17:49:19.865180   83954 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 17:49:19.865196   83954 main.go:141] libmachine: Making call to close driver server
	I0920 17:49:19.865222   83954 main.go:141] libmachine: (addons-545460) Calling .Close
	I0920 17:49:19.865470   83954 main.go:141] libmachine: (addons-545460) DBG | Closing plugin on server side
	I0920 17:49:19.865514   83954 main.go:141] libmachine: Successfully made call to close driver server
	I0920 17:49:19.865522   83954 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 17:49:19.865524   83954 main.go:141] libmachine: (addons-545460) DBG | Closing plugin on server side
	I0920 17:49:19.865531   83954 main.go:141] libmachine: Making call to close driver server
	I0920 17:49:19.865539   83954 main.go:141] libmachine: (addons-545460) Calling .Close
	I0920 17:49:19.865557   83954 main.go:141] libmachine: Successfully made call to close driver server
	I0920 17:49:19.865564   83954 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 17:49:19.865572   83954 main.go:141] libmachine: Making call to close driver server
	I0920 17:49:19.865578   83954 main.go:141] libmachine: (addons-545460) Calling .Close
	I0920 17:49:19.865596   83954 main.go:141] libmachine: (addons-545460) DBG | Closing plugin on server side
	I0920 17:49:19.865620   83954 main.go:141] libmachine: Successfully made call to close driver server
	I0920 17:49:19.865626   83954 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 17:49:19.865633   83954 main.go:141] libmachine: Making call to close driver server
	I0920 17:49:19.865639   83954 main.go:141] libmachine: (addons-545460) Calling .Close
	I0920 17:49:19.865685   83954 main.go:141] libmachine: (addons-545460) DBG | Closing plugin on server side
	I0920 17:49:19.865756   83954 main.go:141] libmachine: (addons-545460) DBG | Closing plugin on server side
	I0920 17:49:19.865766   83954 main.go:141] libmachine: Successfully made call to close driver server
	I0920 17:49:19.865772   83954 main.go:141] libmachine: (addons-545460) DBG | Closing plugin on server side
	I0920 17:49:19.865777   83954 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 17:49:19.865786   83954 main.go:141] libmachine: Making call to close driver server
	I0920 17:49:19.865793   83954 main.go:141] libmachine: (addons-545460) Calling .Close
	I0920 17:49:19.865795   83954 main.go:141] libmachine: Successfully made call to close driver server
	I0920 17:49:19.865803   83954 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 17:49:19.865810   83954 main.go:141] libmachine: Making call to close driver server
	I0920 17:49:19.865816   83954 main.go:141] libmachine: (addons-545460) Calling .Close
	I0920 17:49:19.865871   83954 main.go:141] libmachine: Successfully made call to close driver server
	I0920 17:49:19.865878   83954 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 17:49:19.865886   83954 main.go:141] libmachine: Making call to close driver server
	I0920 17:49:19.865892   83954 main.go:141] libmachine: (addons-545460) Calling .Close
	I0920 17:49:19.865919   83954 main.go:141] libmachine: (addons-545460) DBG | Closing plugin on server side
	I0920 17:49:19.865943   83954 main.go:141] libmachine: Successfully made call to close driver server
	I0920 17:49:19.865949   83954 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 17:49:19.865957   83954 main.go:141] libmachine: Making call to close driver server
	I0920 17:49:19.865963   83954 main.go:141] libmachine: (addons-545460) Calling .Close
	I0920 17:49:19.866008   83954 main.go:141] libmachine: (addons-545460) DBG | Closing plugin on server side
	I0920 17:49:19.866065   83954 main.go:141] libmachine: Successfully made call to close driver server
	I0920 17:49:19.866071   83954 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 17:49:19.866078   83954 main.go:141] libmachine: Making call to close driver server
	I0920 17:49:19.866083   83954 main.go:141] libmachine: (addons-545460) Calling .Close
	I0920 17:49:19.866158   83954 main.go:141] libmachine: Successfully made call to close driver server
	I0920 17:49:19.866165   83954 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 17:49:19.866174   83954 main.go:141] libmachine: Making call to close driver server
	I0920 17:49:19.866183   83954 main.go:141] libmachine: (addons-545460) Calling .Close
	I0920 17:49:19.868640   83954 main.go:141] libmachine: (addons-545460) DBG | Closing plugin on server side
	I0920 17:49:19.868656   83954 main.go:141] libmachine: Successfully made call to close driver server
	I0920 17:49:19.868668   83954 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 17:49:19.868669   83954 main.go:141] libmachine: (addons-545460) DBG | Closing plugin on server side
	I0920 17:49:19.868680   83954 main.go:141] libmachine: Successfully made call to close driver server
	I0920 17:49:19.868690   83954 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 17:49:19.868695   83954 main.go:141] libmachine: Successfully made call to close driver server
	I0920 17:49:19.868701   83954 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 17:49:19.868745   83954 main.go:141] libmachine: (addons-545460) DBG | Closing plugin on server side
	I0920 17:49:19.868782   83954 main.go:141] libmachine: Successfully made call to close driver server
	I0920 17:49:19.868818   83954 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 17:49:19.868832   83954 main.go:141] libmachine: (addons-545460) DBG | Closing plugin on server side
	I0920 17:49:19.868872   83954 main.go:141] libmachine: (addons-545460) DBG | Closing plugin on server side
	I0920 17:49:19.868891   83954 main.go:141] libmachine: Successfully made call to close driver server
	I0920 17:49:19.868909   83954 main.go:141] libmachine: Successfully made call to close driver server
	I0920 17:49:19.868923   83954 main.go:141] libmachine: Successfully made call to close driver server
	I0920 17:49:19.868927   83954 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 17:49:19.868932   83954 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 17:49:19.868948   83954 main.go:141] libmachine: (addons-545460) DBG | Closing plugin on server side
	I0920 17:49:19.868967   83954 main.go:141] libmachine: (addons-545460) DBG | Closing plugin on server side
	I0920 17:49:19.868936   83954 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 17:49:19.869134   83954 main.go:141] libmachine: Successfully made call to close driver server
	I0920 17:49:19.869320   83954 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 17:49:19.869339   83954 addons.go:475] Verifying addon registry=true in "addons-545460"
	I0920 17:49:19.869293   83954 addons.go:475] Verifying addon metrics-server=true in "addons-545460"
	I0920 17:49:19.868897   83954 main.go:141] libmachine: (addons-545460) DBG | Closing plugin on server side
	I0920 17:49:19.868992   83954 main.go:141] libmachine: Successfully made call to close driver server
	I0920 17:49:19.869484   83954 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 17:49:19.869541   83954 out.go:177] * Verifying ingress addon...
	I0920 17:49:19.868975   83954 main.go:141] libmachine: Successfully made call to close driver server
	I0920 17:49:19.869930   83954 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 17:49:19.869009   83954 main.go:141] libmachine: Successfully made call to close driver server
	I0920 17:49:19.870002   83954 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 17:49:19.869226   83954 main.go:141] libmachine: (addons-545460) DBG | Closing plugin on server side
	I0920 17:49:19.871052   83954 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-545460 service yakd-dashboard -n yakd-dashboard
	
	I0920 17:49:19.871061   83954 out.go:177] * Verifying registry addon...
	I0920 17:49:19.872570   83954 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0920 17:49:19.874019   83954 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0920 17:49:19.992324   83954 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0920 17:49:19.992350   83954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:49:19.993151   83954 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0920 17:49:19.993176   83954 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:49:20.102132   83954 main.go:141] libmachine: Making call to close driver server
	I0920 17:49:20.102162   83954 main.go:141] libmachine: (addons-545460) Calling .Close
	I0920 17:49:20.102310   83954 main.go:141] libmachine: Making call to close driver server
	I0920 17:49:20.102329   83954 main.go:141] libmachine: (addons-545460) Calling .Close
	I0920 17:49:20.102434   83954 main.go:141] libmachine: Successfully made call to close driver server
	I0920 17:49:20.102450   83954 main.go:141] libmachine: Making call to close connection to plugin binary
	W0920 17:49:20.102546   83954 out.go:270] ! Enabling 'storage-provisioner-rancher' returned an error: running callbacks: [Error making local-path the default storage class: Error while marking storage class local-path as default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
	I0920 17:49:20.102761   83954 main.go:141] libmachine: (addons-545460) DBG | Closing plugin on server side
	I0920 17:49:20.102778   83954 main.go:141] libmachine: Successfully made call to close driver server
	I0920 17:49:20.102808   83954 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 17:49:20.172592   83954 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0920 17:49:20.193338   83954 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (9.069987408s)
	I0920 17:49:20.193387   83954 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (5.283644255s)
	I0920 17:49:20.193423   83954 main.go:141] libmachine: Making call to close driver server
	I0920 17:49:20.193438   83954 main.go:141] libmachine: (addons-545460) Calling .Close
	I0920 17:49:20.193713   83954 main.go:141] libmachine: (addons-545460) DBG | Closing plugin on server side
	I0920 17:49:20.193742   83954 main.go:141] libmachine: Successfully made call to close driver server
	I0920 17:49:20.193758   83954 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 17:49:20.193773   83954 main.go:141] libmachine: Making call to close driver server
	I0920 17:49:20.193781   83954 main.go:141] libmachine: (addons-545460) Calling .Close
	I0920 17:49:20.193996   83954 main.go:141] libmachine: Successfully made call to close driver server
	I0920 17:49:20.194017   83954 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 17:49:20.194028   83954 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-545460"
	I0920 17:49:20.195638   83954 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0920 17:49:20.195665   83954 out.go:177] * Verifying csi-hostpath-driver addon...
	I0920 17:49:20.196899   83954 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0920 17:49:20.197848   83954 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0920 17:49:20.198021   83954 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0920 17:49:20.198054   83954 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0920 17:49:20.327501   83954 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0920 17:49:20.327528   83954 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0920 17:49:20.459387   83954 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0920 17:49:20.459414   83954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:49:20.514996   83954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:49:20.515540   83954 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0920 17:49:20.515561   83954 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0920 17:49:20.515625   83954 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:49:20.627663   83954 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0920 17:49:20.730197   83954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:49:20.885056   83954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:49:20.886054   83954 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:49:21.203300   83954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:49:21.353657   83954 pod_ready.go:98] pod "coredns-7c65d6cfc9-549g6" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:PodReadyToStartContainers Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-20 17:49:21 +0000 UTC Reason: Message:} {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-20 17:49:07 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-20 17:49:07 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-20 17:49:07 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-20 17:49:07 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.39.174 HostIPs:[{IP:192.168.39.174}] PodIP:10.244.0.3 PodIPs:[{IP:10.244.0.3}] StartTime:2024-09-20 17:49:07 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2024-09-20 17:49:08 +0000 UTC,FinishedAt:2024-09-20 17:49:18 +0000 UTC,ContainerID:docker://23f5cbfcee17a77088b5f8317b83f0611c59f16394e5e0dd827a5a15080a307a,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.11.3 ImageID:docker-pullable://registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e ContainerID:docker://23f5cbfcee17a77088b5f8317b83f0611c59f16394e5e0dd827a5a15080a307a Started:0xc001c04220 AllocatedResources:map[] Resources:nil VolumeMounts:[{Name:config-volume MountPath:/etc/coredns ReadOnly:true RecursiveReadOnly:0xc001a7b380} {Name:kube-api-access-ntrkk MountPath:/var/run/secrets/kubernetes.io/serviceaccount ReadOnly:true RecursiveReadOnly:0xc001a7b390}] User:nil AllocatedResourcesStatus:[]}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0920 17:49:21.353687   83954 pod_ready.go:82] duration metric: took 13.018016692s for pod "coredns-7c65d6cfc9-549g6" in "kube-system" namespace to be "Ready" ...
	E0920 17:49:21.353699   83954 pod_ready.go:67] WaitExtra: waitPodCondition: pod "coredns-7c65d6cfc9-549g6" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:PodReadyToStartContainers Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-20 17:49:21 +0000 UTC Reason: Message:} {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-20 17:49:07 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-20 17:49:07 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-20 17:49:07 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-20 17:49:07 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.39.174 HostIPs:[{IP:192.168.39.174}] PodIP:10.244.0.3 PodIPs:[{IP:10.244.0.3}] StartTime:2024-09-20 17:49:07 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2024-09-20 17:49:08 +0000 UTC,FinishedAt:2024-09-20 17:49:18 +0000 UTC,ContainerID:docker://23f5cbfcee17a77088b5f8317b83f0611c59f16394e5e0dd827a5a15080a307a,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.11.3 ImageID:docker-pullable://registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e ContainerID:docker://23f5cbfcee17a77088b5f8317b83f0611c59f16394e5e0dd827a5a15080a307a Started:0xc001c04220 AllocatedResources:map[] Resources:nil VolumeMounts:[{Name:config-volume MountPath:/etc/coredns ReadOnly:true RecursiveReadOnly:0xc001a7b380} {Name:kube-api-access-ntrkk MountPath:/var/run/secrets/kubernetes.io/serviceaccount ReadOnly:true RecursiveReadOnly:0xc001a7b390}] User:nil AllocatedResourcesStatus:[]}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0920 17:49:21.353711   83954 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-q28v2" in "kube-system" namespace to be "Ready" ...
	I0920 17:49:21.385421   83954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:49:21.385868   83954 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:49:21.719004   83954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:49:21.876931   83954 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:49:21.877302   83954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:49:22.202188   83954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:49:22.382160   83954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:49:22.383963   83954 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:49:22.474850   83954 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.302208912s)
	I0920 17:49:22.474911   83954 main.go:141] libmachine: Making call to close driver server
	I0920 17:49:22.474928   83954 main.go:141] libmachine: (addons-545460) Calling .Close
	I0920 17:49:22.475246   83954 main.go:141] libmachine: (addons-545460) DBG | Closing plugin on server side
	I0920 17:49:22.475301   83954 main.go:141] libmachine: Successfully made call to close driver server
	I0920 17:49:22.475318   83954 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 17:49:22.475331   83954 main.go:141] libmachine: Making call to close driver server
	I0920 17:49:22.475342   83954 main.go:141] libmachine: (addons-545460) Calling .Close
	I0920 17:49:22.475637   83954 main.go:141] libmachine: Successfully made call to close driver server
	I0920 17:49:22.475655   83954 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 17:49:22.480471   83954 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.852766474s)
	I0920 17:49:22.480515   83954 main.go:141] libmachine: Making call to close driver server
	I0920 17:49:22.480532   83954 main.go:141] libmachine: (addons-545460) Calling .Close
	I0920 17:49:22.480793   83954 main.go:141] libmachine: Successfully made call to close driver server
	I0920 17:49:22.480808   83954 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 17:49:22.480816   83954 main.go:141] libmachine: Making call to close driver server
	I0920 17:49:22.480833   83954 main.go:141] libmachine: (addons-545460) Calling .Close
	I0920 17:49:22.481034   83954 main.go:141] libmachine: Successfully made call to close driver server
	I0920 17:49:22.481062   83954 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 17:49:22.481063   83954 main.go:141] libmachine: (addons-545460) DBG | Closing plugin on server side
	I0920 17:49:22.483612   83954 addons.go:475] Verifying addon gcp-auth=true in "addons-545460"
	I0920 17:49:22.485408   83954 out.go:177] * Verifying gcp-auth addon...
	I0920 17:49:22.487584   83954 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0920 17:49:22.497753   83954 kapi.go:86] Found 0 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0920 17:49:22.701933   83954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:49:22.879043   83954 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:49:22.879087   83954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:49:23.202520   83954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:49:23.359769   83954 pod_ready.go:103] pod "coredns-7c65d6cfc9-q28v2" in "kube-system" namespace has status "Ready":"False"
	I0920 17:49:23.377315   83954 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:49:23.377878   83954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:49:23.702884   83954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:49:23.877586   83954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:49:23.877624   83954 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:49:24.204080   83954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:49:24.376827   83954 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:49:24.378319   83954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:49:24.702550   83954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:49:24.877253   83954 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:49:24.878280   83954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:49:25.203247   83954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:49:25.364400   83954 pod_ready.go:103] pod "coredns-7c65d6cfc9-q28v2" in "kube-system" namespace has status "Ready":"False"
	I0920 17:49:25.378069   83954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:49:25.378119   83954 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:49:25.702414   83954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:49:25.876081   83954 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:49:25.877403   83954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:49:26.347205   83954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:49:26.447363   83954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:49:26.447518   83954 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:49:26.702307   83954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:49:26.878017   83954 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:49:26.878207   83954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:49:27.203084   83954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:49:27.377062   83954 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:49:27.377648   83954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:49:28.042955   83954 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:49:28.044408   83954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:49:28.044776   83954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:49:28.045531   83954 pod_ready.go:103] pod "coredns-7c65d6cfc9-q28v2" in "kube-system" namespace has status "Ready":"False"
	I0920 17:49:28.206641   83954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:49:28.381696   83954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:49:28.382260   83954 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:49:28.703460   83954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:49:28.876410   83954 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:49:28.877433   83954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:49:29.202929   83954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:49:29.377760   83954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:49:29.378250   83954 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:49:29.703594   83954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:49:29.876876   83954 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:49:29.878410   83954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:49:30.203473   83954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:49:30.360282   83954 pod_ready.go:103] pod "coredns-7c65d6cfc9-q28v2" in "kube-system" namespace has status "Ready":"False"
	I0920 17:49:30.376292   83954 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:49:30.377590   83954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:49:30.702475   83954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:49:30.876963   83954 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:49:30.878089   83954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:49:31.202597   83954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:49:31.376764   83954 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:49:31.383051   83954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:49:31.704445   83954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:49:31.880186   83954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:49:31.880322   83954 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:49:32.202712   83954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:49:32.385452   83954 pod_ready.go:103] pod "coredns-7c65d6cfc9-q28v2" in "kube-system" namespace has status "Ready":"False"
	I0920 17:49:32.418856   83954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:49:32.418900   83954 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:49:32.703318   83954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:49:32.876375   83954 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:49:32.877811   83954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:49:33.202783   83954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:49:33.376623   83954 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:49:33.378105   83954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:49:33.702680   83954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:49:33.877754   83954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:49:33.877891   83954 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:49:34.202188   83954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:49:34.377558   83954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:49:34.378215   83954 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:49:34.702606   83954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:49:34.859161   83954 pod_ready.go:103] pod "coredns-7c65d6cfc9-q28v2" in "kube-system" namespace has status "Ready":"False"
	I0920 17:49:34.877541   83954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:49:34.877668   83954 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:49:35.202439   83954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:49:35.376809   83954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:49:35.377685   83954 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:49:35.702855   83954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:49:35.876910   83954 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:49:35.878178   83954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:49:36.208347   83954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:49:36.377360   83954 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:49:36.380806   83954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:49:36.712980   83954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:49:36.861999   83954 pod_ready.go:103] pod "coredns-7c65d6cfc9-q28v2" in "kube-system" namespace has status "Ready":"False"
	I0920 17:49:36.877552   83954 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:49:36.879048   83954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:49:37.202928   83954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:49:37.705011   83954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:49:37.705817   83954 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:49:37.805001   83954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:49:37.877845   83954 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:49:37.878670   83954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:49:38.202757   83954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:49:38.376154   83954 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:49:38.377838   83954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:49:38.702307   83954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:49:38.876322   83954 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:49:38.877620   83954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:49:39.202800   83954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:49:39.360081   83954 pod_ready.go:103] pod "coredns-7c65d6cfc9-q28v2" in "kube-system" namespace has status "Ready":"False"
	I0920 17:49:39.379163   83954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:49:39.379399   83954 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:49:39.705932   83954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:49:39.877729   83954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:49:39.878131   83954 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:49:40.203308   83954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:49:40.378387   83954 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:49:40.379461   83954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:49:40.702164   83954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:49:40.876725   83954 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:49:40.878924   83954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:49:41.203040   83954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:49:41.373698   83954 pod_ready.go:103] pod "coredns-7c65d6cfc9-q28v2" in "kube-system" namespace has status "Ready":"False"
	I0920 17:49:41.381258   83954 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:49:41.382052   83954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:49:41.703456   83954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:49:41.877486   83954 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:49:41.877848   83954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:49:42.201909   83954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:49:42.376697   83954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:49:42.378060   83954 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:49:42.702978   83954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:49:42.877088   83954 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:49:42.878539   83954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:49:43.203139   83954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:49:43.376012   83954 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:49:43.377415   83954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:49:43.702481   83954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:49:43.863118   83954 pod_ready.go:103] pod "coredns-7c65d6cfc9-q28v2" in "kube-system" namespace has status "Ready":"False"
	I0920 17:49:43.880846   83954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:49:43.881316   83954 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:49:44.203227   83954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:49:44.376191   83954 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:49:44.377140   83954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:49:44.701466   83954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:49:44.876588   83954 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:49:44.877645   83954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:49:45.202470   83954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:49:45.377382   83954 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:49:45.378903   83954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:49:46.048720   83954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:49:46.049542   83954 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:49:46.049835   83954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:49:46.062936   83954 pod_ready.go:93] pod "coredns-7c65d6cfc9-q28v2" in "kube-system" namespace has status "Ready":"True"
	I0920 17:49:46.062970   83954 pod_ready.go:82] duration metric: took 24.709249966s for pod "coredns-7c65d6cfc9-q28v2" in "kube-system" namespace to be "Ready" ...
	I0920 17:49:46.062984   83954 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-545460" in "kube-system" namespace to be "Ready" ...
	I0920 17:49:46.082376   83954 pod_ready.go:93] pod "etcd-addons-545460" in "kube-system" namespace has status "Ready":"True"
	I0920 17:49:46.082400   83954 pod_ready.go:82] duration metric: took 19.407231ms for pod "etcd-addons-545460" in "kube-system" namespace to be "Ready" ...
	I0920 17:49:46.082411   83954 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-545460" in "kube-system" namespace to be "Ready" ...
	I0920 17:49:46.089227   83954 pod_ready.go:93] pod "kube-apiserver-addons-545460" in "kube-system" namespace has status "Ready":"True"
	I0920 17:49:46.089254   83954 pod_ready.go:82] duration metric: took 6.834352ms for pod "kube-apiserver-addons-545460" in "kube-system" namespace to be "Ready" ...
	I0920 17:49:46.089267   83954 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-545460" in "kube-system" namespace to be "Ready" ...
	I0920 17:49:46.108472   83954 pod_ready.go:93] pod "kube-controller-manager-addons-545460" in "kube-system" namespace has status "Ready":"True"
	I0920 17:49:46.108506   83954 pod_ready.go:82] duration metric: took 19.230119ms for pod "kube-controller-manager-addons-545460" in "kube-system" namespace to be "Ready" ...
	I0920 17:49:46.108522   83954 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-tg4qm" in "kube-system" namespace to be "Ready" ...
	I0920 17:49:46.117036   83954 pod_ready.go:93] pod "kube-proxy-tg4qm" in "kube-system" namespace has status "Ready":"True"
	I0920 17:49:46.117059   83954 pod_ready.go:82] duration metric: took 8.529228ms for pod "kube-proxy-tg4qm" in "kube-system" namespace to be "Ready" ...
	I0920 17:49:46.117071   83954 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-545460" in "kube-system" namespace to be "Ready" ...
	I0920 17:49:46.203566   83954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:49:46.257718   83954 pod_ready.go:93] pod "kube-scheduler-addons-545460" in "kube-system" namespace has status "Ready":"True"
	I0920 17:49:46.257748   83954 pod_ready.go:82] duration metric: took 140.667778ms for pod "kube-scheduler-addons-545460" in "kube-system" namespace to be "Ready" ...
	I0920 17:49:46.257759   83954 pod_ready.go:39] duration metric: took 37.932603609s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0920 17:49:46.257787   83954 api_server.go:52] waiting for apiserver process to appear ...
	I0920 17:49:46.257867   83954 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 17:49:46.279605   83954 api_server.go:72] duration metric: took 39.073051539s to wait for apiserver process to appear ...
	I0920 17:49:46.279639   83954 api_server.go:88] waiting for apiserver healthz status ...
	I0920 17:49:46.279681   83954 api_server.go:253] Checking apiserver healthz at https://192.168.39.174:8443/healthz ...
	I0920 17:49:46.284762   83954 api_server.go:279] https://192.168.39.174:8443/healthz returned 200:
	ok
	I0920 17:49:46.285923   83954 api_server.go:141] control plane version: v1.31.1
	I0920 17:49:46.285944   83954 api_server.go:131] duration metric: took 6.299331ms to wait for apiserver health ...
	I0920 17:49:46.285953   83954 system_pods.go:43] waiting for kube-system pods to appear ...
	I0920 17:49:46.377090   83954 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:49:46.377980   83954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:49:46.463599   83954 system_pods.go:59] 17 kube-system pods found
	I0920 17:49:46.463635   83954 system_pods.go:61] "coredns-7c65d6cfc9-q28v2" [e8b79058-6f69-4b08-a199-dca458522614] Running
	I0920 17:49:46.463645   83954 system_pods.go:61] "csi-hostpath-attacher-0" [cd96d10f-09c5-4ef8-b2ef-1ce458c4a0d5] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0920 17:49:46.463659   83954 system_pods.go:61] "csi-hostpath-resizer-0" [04f5dbfc-2785-4a68-8a63-e6950bf516f4] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0920 17:49:46.463669   83954 system_pods.go:61] "csi-hostpathplugin-vwjf7" [86e3f9ed-2d73-428c-8f14-7bf8ce94b96e] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0920 17:49:46.463677   83954 system_pods.go:61] "etcd-addons-545460" [79c710a2-51bc-4e34-8aa9-03c521c40b83] Running
	I0920 17:49:46.463682   83954 system_pods.go:61] "kube-apiserver-addons-545460" [f7805ed9-1409-42b2-9f13-8cd5f36c1248] Running
	I0920 17:49:46.463688   83954 system_pods.go:61] "kube-controller-manager-addons-545460" [7ea02ac9-b574-49df-b5e7-6e2ed685c401] Running
	I0920 17:49:46.463695   83954 system_pods.go:61] "kube-ingress-dns-minikube" [c12802a6-4672-4501-b1f5-43242cc8bea6] Running
	I0920 17:49:46.463701   83954 system_pods.go:61] "kube-proxy-tg4qm" [02bfdb2c-abde-48ab-aebe-5d4583f30b32] Running
	I0920 17:49:46.463709   83954 system_pods.go:61] "kube-scheduler-addons-545460" [bf1da34d-e5a5-4fdd-8b84-61b91a003c32] Running
	I0920 17:49:46.463720   83954 system_pods.go:61] "metrics-server-84c5f94fbc-klw4f" [797d18c7-0814-4734-bef9-1573075bab38] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0920 17:49:46.463729   83954 system_pods.go:61] "nvidia-device-plugin-daemonset-vk68k" [df7a9429-e3d3-4ec6-8ae2-ff9a97192f55] Running
	I0920 17:49:46.463735   83954 system_pods.go:61] "registry-66c9cd494c-fbp5k" [ff9004d3-6b62-4519-a372-a121accd4feb] Running
	I0920 17:49:46.463745   83954 system_pods.go:61] "registry-proxy-cpcfm" [8b08dbe7-93bf-426c-bf4d-ed3ea87b2d2f] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0920 17:49:46.463758   83954 system_pods.go:61] "snapshot-controller-56fcc65765-4mm4j" [d4ab4d41-a2c2-4ef5-a0b6-0f1a68c82f21] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0920 17:49:46.463766   83954 system_pods.go:61] "snapshot-controller-56fcc65765-xx9kz" [9a4244ad-655b-4ff7-af19-7f1e967db631] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0920 17:49:46.463774   83954 system_pods.go:61] "storage-provisioner" [1062d573-c7b9-43cf-9cb4-af397ff09ed9] Running
	I0920 17:49:46.463787   83954 system_pods.go:74] duration metric: took 177.826527ms to wait for pod list to return data ...
	I0920 17:49:46.463801   83954 default_sa.go:34] waiting for default service account to be created ...
	I0920 17:49:46.657937   83954 default_sa.go:45] found service account: "default"
	I0920 17:49:46.657965   83954 default_sa.go:55] duration metric: took 194.153577ms for default service account to be created ...
	I0920 17:49:46.657978   83954 system_pods.go:116] waiting for k8s-apps to be running ...
	I0920 17:49:46.771084   83954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:49:46.875913   83954 system_pods.go:86] 17 kube-system pods found
	I0920 17:49:46.875941   83954 system_pods.go:89] "coredns-7c65d6cfc9-q28v2" [e8b79058-6f69-4b08-a199-dca458522614] Running
	I0920 17:49:46.875951   83954 system_pods.go:89] "csi-hostpath-attacher-0" [cd96d10f-09c5-4ef8-b2ef-1ce458c4a0d5] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0920 17:49:46.875958   83954 system_pods.go:89] "csi-hostpath-resizer-0" [04f5dbfc-2785-4a68-8a63-e6950bf516f4] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0920 17:49:46.875966   83954 system_pods.go:89] "csi-hostpathplugin-vwjf7" [86e3f9ed-2d73-428c-8f14-7bf8ce94b96e] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0920 17:49:46.875970   83954 system_pods.go:89] "etcd-addons-545460" [79c710a2-51bc-4e34-8aa9-03c521c40b83] Running
	I0920 17:49:46.875974   83954 system_pods.go:89] "kube-apiserver-addons-545460" [f7805ed9-1409-42b2-9f13-8cd5f36c1248] Running
	I0920 17:49:46.875978   83954 system_pods.go:89] "kube-controller-manager-addons-545460" [7ea02ac9-b574-49df-b5e7-6e2ed685c401] Running
	I0920 17:49:46.875983   83954 system_pods.go:89] "kube-ingress-dns-minikube" [c12802a6-4672-4501-b1f5-43242cc8bea6] Running
	I0920 17:49:46.875986   83954 system_pods.go:89] "kube-proxy-tg4qm" [02bfdb2c-abde-48ab-aebe-5d4583f30b32] Running
	I0920 17:49:46.875989   83954 system_pods.go:89] "kube-scheduler-addons-545460" [bf1da34d-e5a5-4fdd-8b84-61b91a003c32] Running
	I0920 17:49:46.875994   83954 system_pods.go:89] "metrics-server-84c5f94fbc-klw4f" [797d18c7-0814-4734-bef9-1573075bab38] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0920 17:49:46.875997   83954 system_pods.go:89] "nvidia-device-plugin-daemonset-vk68k" [df7a9429-e3d3-4ec6-8ae2-ff9a97192f55] Running
	I0920 17:49:46.876001   83954 system_pods.go:89] "registry-66c9cd494c-fbp5k" [ff9004d3-6b62-4519-a372-a121accd4feb] Running
	I0920 17:49:46.876007   83954 system_pods.go:89] "registry-proxy-cpcfm" [8b08dbe7-93bf-426c-bf4d-ed3ea87b2d2f] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0920 17:49:46.876014   83954 system_pods.go:89] "snapshot-controller-56fcc65765-4mm4j" [d4ab4d41-a2c2-4ef5-a0b6-0f1a68c82f21] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0920 17:49:46.876019   83954 system_pods.go:89] "snapshot-controller-56fcc65765-xx9kz" [9a4244ad-655b-4ff7-af19-7f1e967db631] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0920 17:49:46.876023   83954 system_pods.go:89] "storage-provisioner" [1062d573-c7b9-43cf-9cb4-af397ff09ed9] Running
	I0920 17:49:46.876031   83954 system_pods.go:126] duration metric: took 218.0464ms to wait for k8s-apps to be running ...
	I0920 17:49:46.876048   83954 system_svc.go:44] waiting for kubelet service to be running ....
	I0920 17:49:46.876092   83954 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0920 17:49:46.880195   83954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:49:46.880817   83954 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:49:46.897612   83954 system_svc.go:56] duration metric: took 21.555244ms WaitForService to wait for kubelet
	I0920 17:49:46.897639   83954 kubeadm.go:582] duration metric: took 39.691090668s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0920 17:49:46.897666   83954 node_conditions.go:102] verifying NodePressure condition ...
	I0920 17:49:47.057755   83954 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0920 17:49:47.057780   83954 node_conditions.go:123] node cpu capacity is 2
	I0920 17:49:47.057791   83954 node_conditions.go:105] duration metric: took 160.120066ms to run NodePressure ...
	I0920 17:49:47.057804   83954 start.go:241] waiting for startup goroutines ...
	I0920 17:49:47.202735   83954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:49:47.376859   83954 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:49:47.378220   83954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:49:47.703429   83954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:49:47.877670   83954 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:49:47.879291   83954 kapi.go:107] duration metric: took 28.005267949s to wait for kubernetes.io/minikube-addons=registry ...
	I0920 17:49:48.202347   83954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:49:48.448996   83954 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:49:48.712555   83954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:49:48.878639   83954 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:49:49.204088   83954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:49:49.377035   83954 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:49:49.703001   83954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:49:49.880367   83954 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:49:50.202630   83954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:49:50.377035   83954 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:49:50.702893   83954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:49:50.879170   83954 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:49:51.202748   83954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:49:51.377448   83954 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:49:51.702627   83954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:49:51.876596   83954 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:49:52.202002   83954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:49:52.378446   83954 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:49:52.702425   83954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:49:52.876968   83954 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:49:53.201655   83954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:49:53.378180   83954 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:49:53.835263   83954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:49:53.934511   83954 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:49:54.203664   83954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:49:54.376325   83954 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:49:54.701818   83954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:49:54.877690   83954 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:49:55.203357   83954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:49:55.377859   83954 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:49:55.702456   83954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:49:55.877216   83954 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:49:56.203174   83954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:49:56.377509   83954 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:49:56.702754   83954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:49:56.877248   83954 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:49:57.202165   83954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:49:57.377068   83954 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:49:57.856993   83954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:49:57.876876   83954 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:49:58.202245   83954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:49:58.378228   83954 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:49:58.705938   83954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:49:58.877866   83954 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:49:59.202706   83954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:49:59.377369   83954 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:49:59.703194   83954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:49:59.880451   83954 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:50:00.202917   83954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:50:00.376857   83954 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:50:00.702814   83954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:50:00.876549   83954 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:50:01.205070   83954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:50:01.377488   83954 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:50:01.702527   83954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:50:01.877201   83954 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:50:02.202903   83954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:50:02.376303   83954 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:50:02.703265   83954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:50:02.879159   83954 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:50:03.215052   83954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:50:03.376938   83954 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:50:03.703348   83954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:50:03.877208   83954 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:50:04.203071   83954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:50:04.376843   83954 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:50:04.704733   83954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:50:04.878972   83954 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:50:05.203277   83954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:50:05.377576   83954 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:50:05.703407   83954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:50:05.878448   83954 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:50:06.214835   83954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:50:06.377007   83954 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:50:06.702233   83954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:50:06.877346   83954 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:50:07.203044   83954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:50:07.377857   83954 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:50:07.701865   83954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:50:07.877052   83954 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:50:08.202592   83954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:50:08.376408   83954 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:50:08.822100   83954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:50:08.880130   83954 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:50:09.203153   83954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:50:09.377192   83954 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:50:09.702665   83954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:50:09.877475   83954 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:50:10.203407   83954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:50:10.377651   83954 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:50:10.703030   83954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:50:10.878074   83954 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:50:11.221367   83954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:50:11.377618   83954 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:50:11.704698   83954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:50:11.877347   83954 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:50:12.203676   83954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:50:12.376010   83954 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:50:12.703192   83954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:50:12.876909   83954 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:50:13.202592   83954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:50:13.377046   83954 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:50:13.702356   83954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:50:13.877997   83954 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:50:14.210427   83954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:50:14.377070   83954 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:50:14.708008   83954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:50:14.877109   83954 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:50:15.202780   83954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:50:15.376918   83954 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:50:15.703348   83954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:50:15.877896   83954 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:50:16.204222   83954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:50:16.388191   83954 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:50:16.707394   83954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:50:16.878840   83954 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:50:17.201815   83954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:50:17.392024   83954 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:50:17.703404   83954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:50:17.877089   83954 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:50:18.202834   83954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:50:18.376619   83954 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:50:18.702993   83954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:50:18.885303   83954 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:50:19.203038   83954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:50:19.379598   83954 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:50:19.712100   83954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:50:19.878521   83954 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:50:20.203322   83954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:50:20.376291   83954 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:50:20.702275   83954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:50:20.877408   83954 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:50:21.202535   83954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:50:21.441167   83954 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:50:21.705166   83954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:50:21.876827   83954 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:50:22.202982   83954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:50:22.377171   83954 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:50:22.702750   83954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:50:22.882325   83954 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:50:23.203941   83954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:50:23.378788   83954 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:50:23.702411   83954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:50:23.876654   83954 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:50:24.203428   83954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:50:24.377132   83954 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:50:24.704365   83954 kapi.go:107] duration metric: took 1m4.506508953s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0920 17:50:24.877669   83954 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:50:25.377321   83954 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:50:25.966817   83954 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:50:26.377690   83954 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:50:26.876698   83954 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:50:27.377664   83954 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:50:27.876998   83954 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:50:28.377219   83954 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:50:28.876181   83954 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:50:29.377034   83954 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:50:29.880244   83954 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:50:30.377432   83954 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:50:30.877359   83954 kapi.go:107] duration metric: took 1m11.004790647s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0920 17:50:45.505100   83954 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0920 17:50:45.505124   83954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:50:45.992672   83954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:50:46.490921   83954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:50:46.994528   83954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:50:47.491508   83954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:50:47.992534   83954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:50:48.491313   83954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:50:48.992379   83954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:50:49.492102   83954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:50:49.992293   83954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:50:50.492080   83954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:50:50.991579   83954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:50:51.491383   83954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:50:51.992456   83954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:50:52.491340   83954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:50:52.991334   83954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:50:53.491942   83954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:50:53.991347   83954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:50:54.491644   83954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:50:54.991462   83954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:50:55.492170   83954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:50:55.991709   83954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:50:56.491061   83954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:50:56.994534   83954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:50:57.491110   83954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:50:57.991233   83954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:50:58.491966   83954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:50:58.991970   83954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:50:59.491188   83954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:50:59.992563   83954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:51:00.491481   83954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:51:00.990781   83954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:51:01.491589   83954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:51:01.991954   83954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:51:02.491640   83954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:51:02.994657   83954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:51:03.491082   83954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:51:03.991447   83954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:51:04.490645   83954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:51:04.991120   83954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:51:05.491614   83954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:51:05.991354   83954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:51:06.491704   83954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:51:06.991304   83954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:51:07.491287   83954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:51:07.992100   83954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:51:08.491096   83954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:51:08.992209   83954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:51:09.491906   83954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:51:09.993843   83954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:51:10.491096   83954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:51:10.992193   83954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:51:11.491863   83954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:51:11.991914   83954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:51:12.491086   83954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:51:12.992375   83954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:51:13.492313   83954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:51:13.992182   83954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:51:14.491822   83954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:51:14.991809   83954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:51:15.491783   83954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:51:15.991464   83954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:51:16.491183   83954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:51:16.991999   83954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:51:17.493513   83954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:51:17.991779   83954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:51:18.490935   83954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:51:18.992721   83954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:51:19.491290   83954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:51:19.992035   83954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:51:20.492248   83954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:51:20.992421   83954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:51:21.491533   83954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:51:21.991281   83954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:51:22.498292   83954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:51:22.991788   83954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:51:23.490909   83954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:51:23.991421   83954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:51:24.492210   83954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:51:24.991679   83954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:51:25.491282   83954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:51:25.991771   83954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:51:26.491731   83954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:51:26.991404   83954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:51:27.491508   83954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:51:27.991692   83954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:51:28.490641   83954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:51:28.991451   83954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:51:29.490601   83954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:51:29.991151   83954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:51:30.491930   83954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:51:30.991177   83954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	... (previous line repeated every ~500ms from 17:51:31 to 17:51:52) ...
	I0920 17:51:52.991905   83954 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:51:53.491568   83954 kapi.go:107] duration metric: took 2m31.003980763s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0920 17:51:53.493450   83954 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-545460 cluster.
	I0920 17:51:53.494710   83954 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0920 17:51:53.496000   83954 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0920 17:51:53.497408   83954 out.go:177] * Enabled addons: ingress-dns, storage-provisioner, cloud-spanner, inspektor-gadget, metrics-server, volcano, nvidia-device-plugin, yakd, default-storageclass, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I0920 17:51:53.498592   83954 addons.go:510] duration metric: took 2m46.292017975s for enable addons: enabled=[ingress-dns storage-provisioner cloud-spanner inspektor-gadget metrics-server volcano nvidia-device-plugin yakd default-storageclass volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
	I0920 17:51:53.498642   83954 start.go:246] waiting for cluster config update ...
	I0920 17:51:53.498670   83954 start.go:255] writing updated cluster config ...
	I0920 17:51:53.499018   83954 ssh_runner.go:195] Run: rm -f paused
	I0920 17:51:53.550462   83954 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I0920 17:51:53.552257   83954 out.go:177] * Done! kubectl is now configured to use "addons-545460" cluster and "default" namespace by default
	
	
	==> Docker <==
	Sep 20 18:01:48 addons-545460 dockerd[1208]: time="2024-09-20T18:01:48.691516900Z" level=info msg="shim disconnected" id=d9cd4baa85c05708cb848c94242d5c45586510ba68476bd3c63a0a8c551f19f4 namespace=moby
	Sep 20 18:01:48 addons-545460 dockerd[1208]: time="2024-09-20T18:01:48.691686658Z" level=warning msg="cleaning up after shim disconnected" id=d9cd4baa85c05708cb848c94242d5c45586510ba68476bd3c63a0a8c551f19f4 namespace=moby
	Sep 20 18:01:48 addons-545460 dockerd[1208]: time="2024-09-20T18:01:48.691702835Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 20 18:01:48 addons-545460 dockerd[1202]: time="2024-09-20T18:01:48.692624628Z" level=info msg="ignoring event" container=d9cd4baa85c05708cb848c94242d5c45586510ba68476bd3c63a0a8c551f19f4 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 20 18:01:48 addons-545460 dockerd[1208]: time="2024-09-20T18:01:48.870227851Z" level=info msg="shim disconnected" id=701842c6d7c0d0a21024aaaf2da165aecdc3f65339f668c24e07b3be15d5f1a1 namespace=moby
	Sep 20 18:01:48 addons-545460 dockerd[1208]: time="2024-09-20T18:01:48.870430582Z" level=warning msg="cleaning up after shim disconnected" id=701842c6d7c0d0a21024aaaf2da165aecdc3f65339f668c24e07b3be15d5f1a1 namespace=moby
	Sep 20 18:01:48 addons-545460 dockerd[1202]: time="2024-09-20T18:01:48.870708148Z" level=info msg="ignoring event" container=701842c6d7c0d0a21024aaaf2da165aecdc3f65339f668c24e07b3be15d5f1a1 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 20 18:01:48 addons-545460 dockerd[1208]: time="2024-09-20T18:01:48.872174797Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 20 18:01:48 addons-545460 dockerd[1208]: time="2024-09-20T18:01:48.905494003Z" level=warning msg="cleanup warnings time=\"2024-09-20T18:01:48Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=moby
	Sep 20 18:01:49 addons-545460 dockerd[1202]: time="2024-09-20T18:01:49.160445460Z" level=info msg="ignoring event" container=57f757fc66c4e45157971068112dfea267638ce0bdafc2bd3b271a23fed58501 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 20 18:01:49 addons-545460 dockerd[1208]: time="2024-09-20T18:01:49.162213582Z" level=info msg="shim disconnected" id=57f757fc66c4e45157971068112dfea267638ce0bdafc2bd3b271a23fed58501 namespace=moby
	Sep 20 18:01:49 addons-545460 dockerd[1208]: time="2024-09-20T18:01:49.162622846Z" level=warning msg="cleaning up after shim disconnected" id=57f757fc66c4e45157971068112dfea267638ce0bdafc2bd3b271a23fed58501 namespace=moby
	Sep 20 18:01:49 addons-545460 dockerd[1208]: time="2024-09-20T18:01:49.162780778Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 20 18:01:49 addons-545460 dockerd[1202]: time="2024-09-20T18:01:49.655420132Z" level=info msg="ignoring event" container=f203dc04709a19d9bb505b4a4fcb08fefcd363574fb168273a194486732b377a module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 20 18:01:49 addons-545460 dockerd[1208]: time="2024-09-20T18:01:49.656998742Z" level=info msg="shim disconnected" id=f203dc04709a19d9bb505b4a4fcb08fefcd363574fb168273a194486732b377a namespace=moby
	Sep 20 18:01:49 addons-545460 dockerd[1208]: time="2024-09-20T18:01:49.657067506Z" level=warning msg="cleaning up after shim disconnected" id=f203dc04709a19d9bb505b4a4fcb08fefcd363574fb168273a194486732b377a namespace=moby
	Sep 20 18:01:49 addons-545460 dockerd[1208]: time="2024-09-20T18:01:49.657077807Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 20 18:01:49 addons-545460 dockerd[1202]: time="2024-09-20T18:01:49.756412258Z" level=info msg="ignoring event" container=3113dd4cea1875f6f785e4ef4213df69d4eaa072b692e2a4902a0a3c94797bf8 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 20 18:01:49 addons-545460 dockerd[1208]: time="2024-09-20T18:01:49.761176351Z" level=info msg="shim disconnected" id=3113dd4cea1875f6f785e4ef4213df69d4eaa072b692e2a4902a0a3c94797bf8 namespace=moby
	Sep 20 18:01:49 addons-545460 dockerd[1208]: time="2024-09-20T18:01:49.761235645Z" level=warning msg="cleaning up after shim disconnected" id=3113dd4cea1875f6f785e4ef4213df69d4eaa072b692e2a4902a0a3c94797bf8 namespace=moby
	Sep 20 18:01:49 addons-545460 dockerd[1208]: time="2024-09-20T18:01:49.761245753Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 20 18:01:49 addons-545460 dockerd[1202]: time="2024-09-20T18:01:49.770820576Z" level=info msg="ignoring event" container=1c2953efd6a46257a85db302bc59ce1771884dd58152be00a12268e43bf0fbfa module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 20 18:01:49 addons-545460 dockerd[1208]: time="2024-09-20T18:01:49.778105874Z" level=info msg="shim disconnected" id=1c2953efd6a46257a85db302bc59ce1771884dd58152be00a12268e43bf0fbfa namespace=moby
	Sep 20 18:01:49 addons-545460 dockerd[1208]: time="2024-09-20T18:01:49.778233059Z" level=warning msg="cleaning up after shim disconnected" id=1c2953efd6a46257a85db302bc59ce1771884dd58152be00a12268e43bf0fbfa namespace=moby
	Sep 20 18:01:49 addons-545460 dockerd[1208]: time="2024-09-20T18:01:49.778348480Z" level=info msg="cleaning up dead shim" namespace=moby
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED             STATE               NAME                                     ATTEMPT             POD ID              POD
	11973a62284c1       kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6                                                  35 seconds ago      Running             hello-world-app                          0                   8cb954bdb2b65       hello-world-app-55bf9c44b4-vmz47
	94362302103eb       nginx@sha256:a5127daff3d6f4606be3100a252419bfa84fd6ee5cd74d0feaca1a5068f97dcf                                                                45 seconds ago      Running             nginx                                    0                   311abf3edea6c       nginx
	2cf15166d39d5       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:e6c5b3bc32072ea370d34c27836efd11b3519d25bd444c2a8efc339cff0e20fb                                 9 minutes ago       Running             gcp-auth                                 0                   12fd049896a15       gcp-auth-89d5ffd79-fqlqd
	1630a3e4f7b05       registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f                          11 minutes ago      Running             csi-snapshotter                          0                   adeffb60aa104       csi-hostpathplugin-vwjf7
	7f9668891492c       registry.k8s.io/sig-storage/csi-provisioner@sha256:ee3b525d5b89db99da3b8eb521d9cd90cb6e9ef0fbb651e98bb37be78d36b5b8                          11 minutes ago      Running             csi-provisioner                          0                   adeffb60aa104       csi-hostpathplugin-vwjf7
	49d7cb8315d2d       registry.k8s.io/sig-storage/livenessprobe@sha256:cacee2b5c36dd59d4c7e8469c05c9e4ef53ecb2df9025fa8c10cdaf61bce62f0                            11 minutes ago      Running             liveness-probe                           0                   adeffb60aa104       csi-hostpathplugin-vwjf7
	a4642a1d7dee4       registry.k8s.io/sig-storage/hostpathplugin@sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5                           11 minutes ago      Running             hostpath                                 0                   adeffb60aa104       csi-hostpathplugin-vwjf7
	3113dd4cea187       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:f1c25991bac2fbb7f5fcf91ed9438df31e30edee6bed5a780464238aa09ad24c                11 minutes ago      Running             node-driver-registrar                    0                   adeffb60aa104       csi-hostpathplugin-vwjf7
	1c2953efd6a46       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:80b9ba94aa2afe24553d69bd165a6a51552d1582d68618ec00d3b804a7d9193c   11 minutes ago      Running             csi-external-health-monitor-controller   0                   adeffb60aa104       csi-hostpathplugin-vwjf7
	06113105bbb90       registry.k8s.io/sig-storage/csi-resizer@sha256:425d8f1b769398127767b06ed97ce62578a3179bcb99809ce93a1649e025ffe7                              11 minutes ago      Running             csi-resizer                              0                   b18a484bb8353       csi-hostpath-resizer-0
	f203dc04709a1       registry.k8s.io/sig-storage/csi-attacher@sha256:9a685020911e2725ad019dbce6e4a5ab93d51e3d4557f115e64343345e05781b                             11 minutes ago      Running             csi-attacher                             0                   5c26647b48385       csi-hostpath-attacher-0
	4090539a3a7f7       ce263a8653f9c                                                                                                                                11 minutes ago      Exited              patch                                    1                   3ec805630db62       ingress-nginx-admission-patch-2tntt
	dc82e4d1e2c16       registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280                      11 minutes ago      Running             volume-snapshot-controller               0                   e56c5c75bba73       snapshot-controller-56fcc65765-4mm4j
	1516126da1a0e       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a320a50cc91bd15fd2d6fa6de58bd98c1bd64b9a6f926ce23a600d87043455a3                   11 minutes ago      Exited              create                                   0                   3c97cfd25b06b       ingress-nginx-admission-create-f5zm8
	0aba4231cd48d       rancher/local-path-provisioner@sha256:e34c88ae0affb1cdefbb874140d6339d4a27ec4ee420ae8199cd839997b05246                                       11 minutes ago      Running             local-path-provisioner                   0                   8d6543d49561b       local-path-provisioner-86d989889c-z5bzs
	34ed71b20cab2       registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280                      11 minutes ago      Running             volume-snapshot-controller               0                   8568abc0a926d       snapshot-controller-56fcc65765-xx9kz
	e632201e135a5       6e38f40d628db                                                                                                                                12 minutes ago      Running             storage-provisioner                      0                   083658e5003c8       storage-provisioner
	160383ad207af       c69fa2e9cbf5f                                                                                                                                12 minutes ago      Running             coredns                                  0                   6904532c12e82       coredns-7c65d6cfc9-q28v2
	e29030aaeb2b3       60c005f310ff3                                                                                                                                12 minutes ago      Running             kube-proxy                               0                   1ef6517b65571       kube-proxy-tg4qm
	74358290403f1       9aa1fad941575                                                                                                                                12 minutes ago      Running             kube-scheduler                           0                   1acbafd6edee3       kube-scheduler-addons-545460
	3e00ead777b6f       175ffd71cce3d                                                                                                                                12 minutes ago      Running             kube-controller-manager                  0                   257a7c3902bce       kube-controller-manager-addons-545460
	f0e4568f5baf1       6bab7719df100                                                                                                                                12 minutes ago      Running             kube-apiserver                           0                   7e8e018afa359       kube-apiserver-addons-545460
	a129dae3a19b3       2e96e5913fc06                                                                                                                                12 minutes ago      Running             etcd                                     0                   f66804c20c8b9       etcd-addons-545460
	
	
	==> coredns [160383ad207a] <==
	[INFO] 127.0.0.1:36645 - 40807 "HINFO IN 2876761972086547065.5795734700635138165. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.036276139s
	[INFO] 10.244.0.7:55068 - 48823 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000264337s
	[INFO] 10.244.0.7:55068 - 44730 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.00055705s
	[INFO] 10.244.0.7:51888 - 60106 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000085007s
	[INFO] 10.244.0.7:51888 - 16328 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000050595s
	[INFO] 10.244.0.7:51963 - 1949 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000145567s
	[INFO] 10.244.0.7:51963 - 16031 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000149993s
	[INFO] 10.244.0.7:58525 - 33715 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000094296s
	[INFO] 10.244.0.7:58525 - 64685 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000059602s
	[INFO] 10.244.0.7:48367 - 37402 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000078589s
	[INFO] 10.244.0.7:48367 - 41501 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.00008578s
	[INFO] 10.244.0.7:40480 - 51439 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000055325s
	[INFO] 10.244.0.7:40480 - 59113 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000111109s
	[INFO] 10.244.0.7:59451 - 14431 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000114304s
	[INFO] 10.244.0.7:59451 - 14937 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000139664s
	[INFO] 10.244.0.7:33039 - 18212 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000124971s
	[INFO] 10.244.0.7:33039 - 42535 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000104805s
	[INFO] 10.244.0.25:45520 - 54893 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000580907s
	[INFO] 10.244.0.25:38305 - 44709 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000091744s
	[INFO] 10.244.0.25:38433 - 35085 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000137645s
	[INFO] 10.244.0.25:37039 - 46062 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000344957s
	[INFO] 10.244.0.25:51630 - 11544 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.00012768s
	[INFO] 10.244.0.25:53311 - 197 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000096643s
	[INFO] 10.244.0.25:42030 - 50027 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 648 0.001794936s
	[INFO] 10.244.0.25:58679 - 37535 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.002419036s
	
	
	==> describe nodes <==
	Name:               addons-545460
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-545460
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=57d42ff8d541388826f5d9c37044129ec69c3d0a
	                    minikube.k8s.io/name=addons-545460
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_20T17_49_02_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-545460
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-545460"}
	                    kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 20 Sep 2024 17:48:59 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-545460
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 20 Sep 2024 18:01:47 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 20 Sep 2024 18:01:37 +0000   Fri, 20 Sep 2024 17:48:57 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 20 Sep 2024 18:01:37 +0000   Fri, 20 Sep 2024 17:48:57 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 20 Sep 2024 18:01:37 +0000   Fri, 20 Sep 2024 17:48:57 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 20 Sep 2024 18:01:37 +0000   Fri, 20 Sep 2024 17:49:03 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.174
	  Hostname:    addons-545460
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	System Info:
	  Machine ID:                 838aead9b13144898bb993390dda86f6
	  System UUID:                838aead9-b131-4489-8bb9-93390dda86f6
	  Boot ID:                    137c2d04-a606-42aa-ad17-86a8efdc42e2
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.2.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (17 in total)
	  Namespace                   Name                                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m16s
	  default                     hello-world-app-55bf9c44b4-vmz47           0 (0%)        0 (0%)      0 (0%)           0 (0%)         38s
	  default                     nginx                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         49s
	  gcp-auth                    gcp-auth-89d5ffd79-fqlqd                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 coredns-7c65d6cfc9-q28v2                   100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     12m
	  kube-system                 csi-hostpath-attacher-0                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 csi-hostpath-resizer-0                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 csi-hostpathplugin-vwjf7                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 etcd-addons-545460                         100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         12m
	  kube-system                 kube-apiserver-addons-545460               250m (12%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-controller-manager-addons-545460      200m (10%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-tg4qm                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-addons-545460               100m (5%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 snapshot-controller-56fcc65765-4mm4j       0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 snapshot-controller-56fcc65765-xx9kz       0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 storage-provisioner                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  local-path-storage          local-path-provisioner-86d989889c-z5bzs    0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (4%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 12m                kube-proxy       
	  Normal  NodeHasSufficientMemory  12m (x8 over 12m)  kubelet          Node addons-545460 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    12m (x8 over 12m)  kubelet          Node addons-545460 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     12m (x7 over 12m)  kubelet          Node addons-545460 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  12m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 12m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  12m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  12m                kubelet          Node addons-545460 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    12m                kubelet          Node addons-545460 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     12m                kubelet          Node addons-545460 status is now: NodeHasSufficientPID
	  Normal  NodeReady                12m                kubelet          Node addons-545460 status is now: NodeReady
	  Normal  RegisteredNode           12m                node-controller  Node addons-545460 event: Registered Node addons-545460 in Controller
	
	
	==> dmesg <==
	[  +7.822693] kauditd_printk_skb: 6 callbacks suppressed
	[  +5.152909] kauditd_printk_skb: 29 callbacks suppressed
	[Sep20 17:50] kauditd_printk_skb: 29 callbacks suppressed
	[  +6.851376] kauditd_printk_skb: 24 callbacks suppressed
	[  +5.052162] kauditd_printk_skb: 17 callbacks suppressed
	[  +5.294271] kauditd_printk_skb: 40 callbacks suppressed
	[  +8.627350] kauditd_printk_skb: 32 callbacks suppressed
	[ +11.103402] kauditd_printk_skb: 42 callbacks suppressed
	[Sep20 17:51] kauditd_printk_skb: 28 callbacks suppressed
	[ +23.859467] kauditd_printk_skb: 40 callbacks suppressed
	[  +8.207642] kauditd_printk_skb: 9 callbacks suppressed
	[Sep20 17:52] kauditd_printk_skb: 28 callbacks suppressed
	[  +6.495486] kauditd_printk_skb: 2 callbacks suppressed
	[ +17.011053] kauditd_printk_skb: 20 callbacks suppressed
	[ +20.127358] kauditd_printk_skb: 21 callbacks suppressed
	[Sep20 17:56] kauditd_printk_skb: 28 callbacks suppressed
	[Sep20 18:00] kauditd_printk_skb: 28 callbacks suppressed
	[  +5.330526] kauditd_printk_skb: 31 callbacks suppressed
	[  +5.426125] kauditd_printk_skb: 32 callbacks suppressed
	[  +7.010798] kauditd_printk_skb: 51 callbacks suppressed
	[Sep20 18:01] kauditd_printk_skb: 12 callbacks suppressed
	[ +11.663813] kauditd_printk_skb: 14 callbacks suppressed
	[  +5.388403] kauditd_printk_skb: 36 callbacks suppressed
	[ +21.645276] kauditd_printk_skb: 10 callbacks suppressed
	[  +7.382190] kauditd_printk_skb: 7 callbacks suppressed
	
	
	==> etcd [a129dae3a19b] <==
	{"level":"info","ts":"2024-09-20T17:50:11.098537Z","caller":"traceutil/trace.go:171","msg":"trace[496607524] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1170; }","duration":"126.079409ms","start":"2024-09-20T17:50:10.972443Z","end":"2024-09-20T17:50:11.098523Z","steps":["trace[496607524] 'range keys from in-memory index tree'  (duration: 125.913584ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-20T17:50:14.304363Z","caller":"traceutil/trace.go:171","msg":"trace[1360159608] transaction","detail":"{read_only:false; response_revision:1186; number_of_response:1; }","duration":"111.921792ms","start":"2024-09-20T17:50:14.192426Z","end":"2024-09-20T17:50:14.304348Z","steps":["trace[1360159608] 'process raft request'  (duration: 107.296932ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-20T17:50:25.843207Z","caller":"traceutil/trace.go:171","msg":"trace[818725232] linearizableReadLoop","detail":"{readStateIndex:1281; appliedIndex:1280; }","duration":"371.56092ms","start":"2024-09-20T17:50:25.471610Z","end":"2024-09-20T17:50:25.843171Z","steps":["trace[818725232] 'read index received'  (duration: 371.404857ms)","trace[818725232] 'applied index is now lower than readState.Index'  (duration: 155.642µs)"],"step_count":2}
	{"level":"warn","ts":"2024-09-20T17:50:25.843567Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"371.921211ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-20T17:50:25.843651Z","caller":"traceutil/trace.go:171","msg":"trace[1303477186] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1247; }","duration":"372.005311ms","start":"2024-09-20T17:50:25.471607Z","end":"2024-09-20T17:50:25.843612Z","steps":["trace[1303477186] 'agreement among raft nodes before linearized reading'  (duration: 371.876778ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-20T17:50:25.843688Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"333.668402ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/endpointslices/\" range_end:\"/registry/endpointslices0\" count_only:true ","response":"range_response_count:0 size:7"}
	{"level":"warn","ts":"2024-09-20T17:50:25.843686Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-20T17:50:25.471579Z","time spent":"372.09358ms","remote":"127.0.0.1:55098","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":27,"request content":"key:\"/registry/pods\" limit:1 "}
	{"level":"info","ts":"2024-09-20T17:50:25.843724Z","caller":"traceutil/trace.go:171","msg":"trace[1226982228] range","detail":"{range_begin:/registry/endpointslices/; range_end:/registry/endpointslices0; response_count:0; response_revision:1247; }","duration":"333.705268ms","start":"2024-09-20T17:50:25.510003Z","end":"2024-09-20T17:50:25.843708Z","steps":["trace[1226982228] 'agreement among raft nodes before linearized reading'  (duration: 333.648712ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-20T17:50:25.843745Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-20T17:50:25.509967Z","time spent":"333.772097ms","remote":"127.0.0.1:55196","response type":"/etcdserverpb.KV/Range","request count":0,"request size":56,"response count":13,"response size":29,"request content":"key:\"/registry/endpointslices/\" range_end:\"/registry/endpointslices0\" count_only:true "}
	{"level":"info","ts":"2024-09-20T17:50:25.843885Z","caller":"traceutil/trace.go:171","msg":"trace[27803121] transaction","detail":"{read_only:false; response_revision:1247; number_of_response:1; }","duration":"373.496737ms","start":"2024-09-20T17:50:25.470379Z","end":"2024-09-20T17:50:25.843876Z","steps":["trace[27803121] 'process raft request'  (duration: 372.690631ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-20T17:50:25.843918Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"226.430288ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-20T17:50:25.843935Z","caller":"traceutil/trace.go:171","msg":"trace[2091775516] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1247; }","duration":"226.449467ms","start":"2024-09-20T17:50:25.617480Z","end":"2024-09-20T17:50:25.843930Z","steps":["trace[2091775516] 'agreement among raft nodes before linearized reading'  (duration: 226.413458ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-20T17:50:25.843965Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-20T17:50:25.470352Z","time spent":"373.550194ms","remote":"127.0.0.1:54980","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":902,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/events/gcp-auth/gcp-auth-certs-create.17f70513422aa3b6\" mod_revision:1069 > success:<request_put:<key:\"/registry/events/gcp-auth/gcp-auth-certs-create.17f70513422aa3b6\" value_size:820 lease:8360811833021307808 >> failure:<request_range:<key:\"/registry/events/gcp-auth/gcp-auth-certs-create.17f70513422aa3b6\" > >"}
	{"level":"warn","ts":"2024-09-20T17:50:25.844010Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"275.894255ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-20T17:50:25.844022Z","caller":"traceutil/trace.go:171","msg":"trace[1925438515] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1247; }","duration":"275.907096ms","start":"2024-09-20T17:50:25.568111Z","end":"2024-09-20T17:50:25.844018Z","steps":["trace[1925438515] 'agreement among raft nodes before linearized reading'  (duration: 275.888471ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-20T17:52:16.315111Z","caller":"traceutil/trace.go:171","msg":"trace[1715008817] linearizableReadLoop","detail":"{readStateIndex:1627; appliedIndex:1626; }","duration":"259.47357ms","start":"2024-09-20T17:52:16.055575Z","end":"2024-09-20T17:52:16.315049Z","steps":["trace[1715008817] 'read index received'  (duration: 257.518318ms)","trace[1715008817] 'applied index is now lower than readState.Index'  (duration: 1.954509ms)"],"step_count":2}
	{"level":"warn","ts":"2024-09-20T17:52:16.315364Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"259.717687ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/prioritylevelconfigurations/\" range_end:\"/registry/prioritylevelconfigurations0\" count_only:true ","response":"range_response_count:0 size:7"}
	{"level":"info","ts":"2024-09-20T17:52:16.315393Z","caller":"traceutil/trace.go:171","msg":"trace[330037749] range","detail":"{range_begin:/registry/prioritylevelconfigurations/; range_end:/registry/prioritylevelconfigurations0; response_count:0; response_revision:1566; }","duration":"259.813224ms","start":"2024-09-20T17:52:16.055571Z","end":"2024-09-20T17:52:16.315384Z","steps":["trace[330037749] 'agreement among raft nodes before linearized reading'  (duration: 259.690494ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-20T17:52:16.315574Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"164.145658ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/flowschemas/\" range_end:\"/registry/flowschemas0\" count_only:true ","response":"range_response_count:0 size:7"}
	{"level":"info","ts":"2024-09-20T17:52:16.315591Z","caller":"traceutil/trace.go:171","msg":"trace[326332479] range","detail":"{range_begin:/registry/flowschemas/; range_end:/registry/flowschemas0; response_count:0; response_revision:1566; }","duration":"164.165456ms","start":"2024-09-20T17:52:16.151421Z","end":"2024-09-20T17:52:16.315586Z","steps":["trace[326332479] 'agreement among raft nodes before linearized reading'  (duration: 164.128238ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-20T17:52:17.902414Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"282.753518ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-20T17:52:17.902483Z","caller":"traceutil/trace.go:171","msg":"trace[1093021613] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1569; }","duration":"282.831621ms","start":"2024-09-20T17:52:17.619640Z","end":"2024-09-20T17:52:17.902472Z","steps":["trace[1093021613] 'range keys from in-memory index tree'  (duration: 282.702199ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-20T17:58:57.989681Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1878}
	{"level":"info","ts":"2024-09-20T17:58:58.086674Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1878,"took":"95.98093ms","hash":119684519,"current-db-size-bytes":8753152,"current-db-size":"8.8 MB","current-db-size-in-use-bytes":4907008,"current-db-size-in-use":"4.9 MB"}
	{"level":"info","ts":"2024-09-20T17:58:58.086752Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":119684519,"revision":1878,"compact-revision":-1}
	
	
	==> gcp-auth [2cf15166d39d] <==
	2024/09/20 17:52:34 Ready to write response ...
	2024/09/20 17:52:34 Ready to marshal response ...
	2024/09/20 17:52:34 Ready to write response ...
	2024/09/20 18:00:37 Ready to marshal response ...
	2024/09/20 18:00:37 Ready to write response ...
	2024/09/20 18:00:37 Ready to marshal response ...
	2024/09/20 18:00:37 Ready to write response ...
	2024/09/20 18:00:37 Ready to marshal response ...
	2024/09/20 18:00:37 Ready to write response ...
	2024/09/20 18:00:37 Ready to marshal response ...
	2024/09/20 18:00:37 Ready to write response ...
	2024/09/20 18:00:37 Ready to marshal response ...
	2024/09/20 18:00:37 Ready to write response ...
	2024/09/20 18:00:47 Ready to marshal response ...
	2024/09/20 18:00:47 Ready to write response ...
	2024/09/20 18:00:48 Ready to marshal response ...
	2024/09/20 18:00:48 Ready to write response ...
	2024/09/20 18:01:01 Ready to marshal response ...
	2024/09/20 18:01:01 Ready to write response ...
	2024/09/20 18:01:12 Ready to marshal response ...
	2024/09/20 18:01:12 Ready to write response ...
	2024/09/20 18:01:13 Ready to marshal response ...
	2024/09/20 18:01:13 Ready to write response ...
	2024/09/20 18:01:39 Ready to marshal response ...
	2024/09/20 18:01:39 Ready to write response ...
	
	
	==> kernel <==
	 18:01:50 up 13 min,  0 users,  load average: 0.58, 0.66, 0.59
	Linux addons-545460 5.10.207 #1 SMP Mon Sep 16 15:00:28 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [f0e4568f5baf] <==
	I0920 17:52:09.895325       1 controller.go:615] quota admission added evaluator for: podgroups.scheduling.volcano.sh
	I0920 17:52:24.589222       1 handler.go:286] Adding GroupVersion batch.volcano.sh v1alpha1 to ResourceManager
	I0920 17:52:24.684339       1 handler.go:286] Adding GroupVersion bus.volcano.sh v1alpha1 to ResourceManager
	E0920 17:52:25.151937       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"volcano-controllers\" not found]"
	I0920 17:52:25.332458       1 handler.go:286] Adding GroupVersion scheduling.volcano.sh v1beta1 to ResourceManager
	I0920 17:52:25.380387       1 handler.go:286] Adding GroupVersion scheduling.volcano.sh v1beta1 to ResourceManager
	I0920 17:52:25.446243       1 handler.go:286] Adding GroupVersion nodeinfo.volcano.sh v1alpha1 to ResourceManager
	I0920 17:52:25.483633       1 handler.go:286] Adding GroupVersion scheduling.volcano.sh v1beta1 to ResourceManager
	I0920 17:52:25.787318       1 handler.go:286] Adding GroupVersion flow.volcano.sh v1alpha1 to ResourceManager
	I0920 17:52:25.804717       1 handler.go:286] Adding GroupVersion flow.volcano.sh v1alpha1 to ResourceManager
	I0920 17:52:25.888125       1 handler.go:286] Adding GroupVersion flow.volcano.sh v1alpha1 to ResourceManager
	W0920 17:52:25.952958       1 cacher.go:171] Terminating all watchers from cacher commands.bus.volcano.sh
	W0920 17:52:26.310538       1 cacher.go:171] Terminating all watchers from cacher jobs.batch.volcano.sh
	W0920 17:52:26.492708       1 cacher.go:171] Terminating all watchers from cacher podgroups.scheduling.volcano.sh
	W0920 17:52:26.538404       1 cacher.go:171] Terminating all watchers from cacher numatopologies.nodeinfo.volcano.sh
	W0920 17:52:26.589420       1 cacher.go:171] Terminating all watchers from cacher queues.scheduling.volcano.sh
	W0920 17:52:26.889243       1 cacher.go:171] Terminating all watchers from cacher jobflows.flow.volcano.sh
	W0920 17:52:27.279561       1 cacher.go:171] Terminating all watchers from cacher jobtemplates.flow.volcano.sh
	I0920 18:00:37.751445       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.103.159.244"}
	I0920 18:01:00.887892       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I0920 18:01:01.064410       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.109.210.29"}
	I0920 18:01:02.028234       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0920 18:01:03.070919       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I0920 18:01:12.647602       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.101.174.24"}
	I0920 18:01:20.927602       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	
	
	==> kube-controller-manager [3e00ead777b6] <==
	I0920 18:01:14.464167       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="17.623303ms"
	I0920 18:01:14.464338       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="119.131µs"
	I0920 18:01:15.039129       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="ingress-nginx/ingress-nginx-admission-create" delay="0s"
	I0920 18:01:15.046486       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-bc57996ff" duration="5.279µs"
	I0920 18:01:15.049113       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="ingress-nginx/ingress-nginx-admission-patch" delay="0s"
	W0920 18:01:17.250952       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0920 18:01:17.251320       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0920 18:01:18.203877       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0920 18:01:18.203945       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0920 18:01:22.559125       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0920 18:01:22.559160       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0920 18:01:25.096655       1 namespace_controller.go:187] "Namespace has been deleted" logger="namespace-controller" namespace="ingress-nginx"
	W0920 18:01:31.096895       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0920 18:01:31.096958       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0920 18:01:35.776934       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0920 18:01:35.777141       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0920 18:01:37.155197       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="addons-545460"
	W0920 18:01:40.590327       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0920 18:01:40.590388       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0920 18:01:44.626783       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0920 18:01:44.626931       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0920 18:01:48.539689       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/registry-66c9cd494c" duration="8.372µs"
	I0920 18:01:49.368329       1 stateful_set.go:466] "StatefulSet has been deleted" logger="statefulset-controller" key="kube-system/csi-hostpath-attacher"
	I0920 18:01:49.559718       1 stateful_set.go:466] "StatefulSet has been deleted" logger="statefulset-controller" key="kube-system/csi-hostpath-resizer"
	I0920 18:01:50.317568       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="addons-545460"
	
	
	==> kube-proxy [e29030aaeb2b] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0920 17:49:08.972713       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0920 17:49:09.004532       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.174"]
	E0920 17:49:09.004648       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0920 17:49:09.092801       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0920 17:49:09.092882       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0920 17:49:09.092924       1 server_linux.go:169] "Using iptables Proxier"
	I0920 17:49:09.095948       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0920 17:49:09.096173       1 server.go:483] "Version info" version="v1.31.1"
	I0920 17:49:09.096183       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0920 17:49:09.098719       1 config.go:199] "Starting service config controller"
	I0920 17:49:09.098785       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0920 17:49:09.098815       1 config.go:105] "Starting endpoint slice config controller"
	I0920 17:49:09.098819       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0920 17:49:09.099530       1 config.go:328] "Starting node config controller"
	I0920 17:49:09.099540       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0920 17:49:09.199450       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0920 17:49:09.199495       1 shared_informer.go:320] Caches are synced for service config
	I0920 17:49:09.199672       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [74358290403f] <==
	W0920 17:48:59.345088       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0920 17:48:59.346058       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0920 17:48:59.346173       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0920 17:48:59.346303       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0920 17:48:59.346343       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0920 17:48:59.346559       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0920 17:48:59.346567       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0920 17:48:59.347234       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0920 17:48:59.346473       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0920 17:48:59.347488       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0920 17:48:59.345292       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0920 17:48:59.347919       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0920 17:48:59.345334       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0920 17:48:59.348195       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0920 17:49:00.224017       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0920 17:49:00.224365       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0920 17:49:00.269343       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0920 17:49:00.269641       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0920 17:49:00.305553       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0920 17:49:00.305818       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0920 17:49:00.529172       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0920 17:49:00.529204       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0920 17:49:00.742942       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0920 17:49:00.743179       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I0920 17:49:03.928936       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 20 18:01:50 addons-545460 kubelet[1978]: I0920 18:01:50.590024    1978 scope.go:117] "RemoveContainer" containerID="7f9668891492cac35a1b996ccea35e10187f67b7022f528e3250fb8d52d628af"
	Sep 20 18:01:50 addons-545460 kubelet[1978]: I0920 18:01:50.612905    1978 scope.go:117] "RemoveContainer" containerID="49d7cb8315d2d35dacb78cda0e5f4ae0b550c00aabf5c3be3d9e543a9195f338"
	Sep 20 18:01:50 addons-545460 kubelet[1978]: I0920 18:01:50.622284    1978 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dev-dir\" (UniqueName: \"kubernetes.io/host-path/86e3f9ed-2d73-428c-8f14-7bf8ce94b96e-dev-dir\") pod \"86e3f9ed-2d73-428c-8f14-7bf8ce94b96e\" (UID: \"86e3f9ed-2d73-428c-8f14-7bf8ce94b96e\") "
	Sep 20 18:01:50 addons-545460 kubelet[1978]: I0920 18:01:50.622386    1978 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/86e3f9ed-2d73-428c-8f14-7bf8ce94b96e-csi-data-dir\") pod \"86e3f9ed-2d73-428c-8f14-7bf8ce94b96e\" (UID: \"86e3f9ed-2d73-428c-8f14-7bf8ce94b96e\") "
	Sep 20 18:01:50 addons-545460 kubelet[1978]: I0920 18:01:50.622405    1978 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/86e3f9ed-2d73-428c-8f14-7bf8ce94b96e-mountpoint-dir\") pod \"86e3f9ed-2d73-428c-8f14-7bf8ce94b96e\" (UID: \"86e3f9ed-2d73-428c-8f14-7bf8ce94b96e\") "
	Sep 20 18:01:50 addons-545460 kubelet[1978]: I0920 18:01:50.622487    1978 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n8bvh\" (UniqueName: \"kubernetes.io/projected/86e3f9ed-2d73-428c-8f14-7bf8ce94b96e-kube-api-access-n8bvh\") pod \"86e3f9ed-2d73-428c-8f14-7bf8ce94b96e\" (UID: \"86e3f9ed-2d73-428c-8f14-7bf8ce94b96e\") "
	Sep 20 18:01:50 addons-545460 kubelet[1978]: I0920 18:01:50.622505    1978 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/86e3f9ed-2d73-428c-8f14-7bf8ce94b96e-socket-dir\") pod \"86e3f9ed-2d73-428c-8f14-7bf8ce94b96e\" (UID: \"86e3f9ed-2d73-428c-8f14-7bf8ce94b96e\") "
	Sep 20 18:01:50 addons-545460 kubelet[1978]: I0920 18:01:50.622656    1978 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/86e3f9ed-2d73-428c-8f14-7bf8ce94b96e-mountpoint-dir" (OuterVolumeSpecName: "mountpoint-dir") pod "86e3f9ed-2d73-428c-8f14-7bf8ce94b96e" (UID: "86e3f9ed-2d73-428c-8f14-7bf8ce94b96e"). InnerVolumeSpecName "mountpoint-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Sep 20 18:01:50 addons-545460 kubelet[1978]: I0920 18:01:50.622730    1978 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/86e3f9ed-2d73-428c-8f14-7bf8ce94b96e-dev-dir" (OuterVolumeSpecName: "dev-dir") pod "86e3f9ed-2d73-428c-8f14-7bf8ce94b96e" (UID: "86e3f9ed-2d73-428c-8f14-7bf8ce94b96e"). InnerVolumeSpecName "dev-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Sep 20 18:01:50 addons-545460 kubelet[1978]: I0920 18:01:50.622744    1978 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/86e3f9ed-2d73-428c-8f14-7bf8ce94b96e-csi-data-dir" (OuterVolumeSpecName: "csi-data-dir") pod "86e3f9ed-2d73-428c-8f14-7bf8ce94b96e" (UID: "86e3f9ed-2d73-428c-8f14-7bf8ce94b96e"). InnerVolumeSpecName "csi-data-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Sep 20 18:01:50 addons-545460 kubelet[1978]: I0920 18:01:50.622804    1978 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnxdk\" (UniqueName: \"kubernetes.io/projected/cd96d10f-09c5-4ef8-b2ef-1ce458c4a0d5-kube-api-access-mnxdk\") pod \"cd96d10f-09c5-4ef8-b2ef-1ce458c4a0d5\" (UID: \"cd96d10f-09c5-4ef8-b2ef-1ce458c4a0d5\") "
	Sep 20 18:01:50 addons-545460 kubelet[1978]: I0920 18:01:50.622877    1978 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/86e3f9ed-2d73-428c-8f14-7bf8ce94b96e-plugins-dir\") pod \"86e3f9ed-2d73-428c-8f14-7bf8ce94b96e\" (UID: \"86e3f9ed-2d73-428c-8f14-7bf8ce94b96e\") "
	Sep 20 18:01:50 addons-545460 kubelet[1978]: I0920 18:01:50.622897    1978 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/cd96d10f-09c5-4ef8-b2ef-1ce458c4a0d5-socket-dir\") pod \"cd96d10f-09c5-4ef8-b2ef-1ce458c4a0d5\" (UID: \"cd96d10f-09c5-4ef8-b2ef-1ce458c4a0d5\") "
	Sep 20 18:01:50 addons-545460 kubelet[1978]: I0920 18:01:50.622921    1978 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/86e3f9ed-2d73-428c-8f14-7bf8ce94b96e-registration-dir\") pod \"86e3f9ed-2d73-428c-8f14-7bf8ce94b96e\" (UID: \"86e3f9ed-2d73-428c-8f14-7bf8ce94b96e\") "
	Sep 20 18:01:50 addons-545460 kubelet[1978]: I0920 18:01:50.622984    1978 reconciler_common.go:288] "Volume detached for volume \"dev-dir\" (UniqueName: \"kubernetes.io/host-path/86e3f9ed-2d73-428c-8f14-7bf8ce94b96e-dev-dir\") on node \"addons-545460\" DevicePath \"\""
	Sep 20 18:01:50 addons-545460 kubelet[1978]: I0920 18:01:50.622993    1978 reconciler_common.go:288] "Volume detached for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/86e3f9ed-2d73-428c-8f14-7bf8ce94b96e-csi-data-dir\") on node \"addons-545460\" DevicePath \"\""
	Sep 20 18:01:50 addons-545460 kubelet[1978]: I0920 18:01:50.623002    1978 reconciler_common.go:288] "Volume detached for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/86e3f9ed-2d73-428c-8f14-7bf8ce94b96e-mountpoint-dir\") on node \"addons-545460\" DevicePath \"\""
	Sep 20 18:01:50 addons-545460 kubelet[1978]: I0920 18:01:50.623027    1978 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/86e3f9ed-2d73-428c-8f14-7bf8ce94b96e-registration-dir" (OuterVolumeSpecName: "registration-dir") pod "86e3f9ed-2d73-428c-8f14-7bf8ce94b96e" (UID: "86e3f9ed-2d73-428c-8f14-7bf8ce94b96e"). InnerVolumeSpecName "registration-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Sep 20 18:01:50 addons-545460 kubelet[1978]: I0920 18:01:50.623047    1978 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/86e3f9ed-2d73-428c-8f14-7bf8ce94b96e-socket-dir" (OuterVolumeSpecName: "socket-dir") pod "86e3f9ed-2d73-428c-8f14-7bf8ce94b96e" (UID: "86e3f9ed-2d73-428c-8f14-7bf8ce94b96e"). InnerVolumeSpecName "socket-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Sep 20 18:01:50 addons-545460 kubelet[1978]: I0920 18:01:50.624569    1978 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/86e3f9ed-2d73-428c-8f14-7bf8ce94b96e-plugins-dir" (OuterVolumeSpecName: "plugins-dir") pod "86e3f9ed-2d73-428c-8f14-7bf8ce94b96e" (UID: "86e3f9ed-2d73-428c-8f14-7bf8ce94b96e"). InnerVolumeSpecName "plugins-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Sep 20 18:01:50 addons-545460 kubelet[1978]: I0920 18:01:50.624639    1978 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cd96d10f-09c5-4ef8-b2ef-1ce458c4a0d5-socket-dir" (OuterVolumeSpecName: "socket-dir") pod "cd96d10f-09c5-4ef8-b2ef-1ce458c4a0d5" (UID: "cd96d10f-09c5-4ef8-b2ef-1ce458c4a0d5"). InnerVolumeSpecName "socket-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Sep 20 18:01:50 addons-545460 kubelet[1978]: I0920 18:01:50.627240    1978 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/86e3f9ed-2d73-428c-8f14-7bf8ce94b96e-kube-api-access-n8bvh" (OuterVolumeSpecName: "kube-api-access-n8bvh") pod "86e3f9ed-2d73-428c-8f14-7bf8ce94b96e" (UID: "86e3f9ed-2d73-428c-8f14-7bf8ce94b96e"). InnerVolumeSpecName "kube-api-access-n8bvh". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 20 18:01:50 addons-545460 kubelet[1978]: I0920 18:01:50.627335    1978 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd96d10f-09c5-4ef8-b2ef-1ce458c4a0d5-kube-api-access-mnxdk" (OuterVolumeSpecName: "kube-api-access-mnxdk") pod "cd96d10f-09c5-4ef8-b2ef-1ce458c4a0d5" (UID: "cd96d10f-09c5-4ef8-b2ef-1ce458c4a0d5"). InnerVolumeSpecName "kube-api-access-mnxdk". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 20 18:01:50 addons-545460 kubelet[1978]: I0920 18:01:50.642894    1978 scope.go:117] "RemoveContainer" containerID="a4642a1d7dee4ce5d6cc479deddb0c8df1ff38bb47d9e16fb68a91ebd96e17f4"
	Sep 20 18:01:50 addons-545460 kubelet[1978]: I0920 18:01:50.673130    1978 scope.go:117] "RemoveContainer" containerID="3113dd4cea1875f6f785e4ef4213df69d4eaa072b692e2a4902a0a3c94797bf8"
	
	
	==> storage-provisioner [e632201e135a] <==
	I0920 17:49:18.103102       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0920 17:49:19.063070       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0920 17:49:19.063143       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0920 17:49:19.121065       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0920 17:49:19.122491       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-545460_a7bc6941-8206-4ca3-8040-78d8722f39ef!
	I0920 17:49:19.124762       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"002709b5-4911-4209-b826-d5ad884b43fd", APIVersion:"v1", ResourceVersion:"854", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-545460_a7bc6941-8206-4ca3-8040-78d8722f39ef became leader
	I0920 17:49:19.332002       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-545460_a7bc6941-8206-4ca3-8040-78d8722f39ef!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-545460 -n addons-545460
helpers_test.go:261: (dbg) Run:  kubectl --context addons-545460 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox csi-hostpath-attacher-0 csi-hostpath-resizer-0 csi-hostpathplugin-vwjf7
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/Registry]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-545460 describe pod busybox csi-hostpath-attacher-0 csi-hostpath-resizer-0 csi-hostpathplugin-vwjf7
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-545460 describe pod busybox csi-hostpath-attacher-0 csi-hostpath-resizer-0 csi-hostpathplugin-vwjf7: exit status 1 (77.758209ms)

-- stdout --
	Name:             busybox
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-545460/192.168.39.174
	Start Time:       Fri, 20 Sep 2024 17:52:34 +0000
	Labels:           integration-test=busybox
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.28
	IPs:
	  IP:  10.244.0.28
	Containers:
	  busybox:
	    Container ID:  
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      sleep
	      3600
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:
	      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
	      PROJECT_ID:                      this_is_fake
	      GCP_PROJECT:                     this_is_fake
	      GCLOUD_PROJECT:                  this_is_fake
	      GOOGLE_CLOUD_PROJECT:            this_is_fake
	      CLOUDSDK_CORE_PROJECT:           this_is_fake
	    Mounts:
	      /google-app-creds.json from gcp-creds (ro)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-5nb7z (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-5nb7z:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	  gcp-creds:
	    Type:          HostPath (bare host directory volume)
	    Path:          /var/lib/minikube/google_application_credentials.json
	    HostPathType:  File
	QoS Class:         BestEffort
	Node-Selectors:    <none>
	Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason          Age                    From               Message
	  ----     ------          ----                   ----               -------
	  Normal   Scheduled       9m17s                  default-scheduler  Successfully assigned default/busybox to addons-545460
	  Normal   SandboxChanged  9m15s                  kubelet            Pod sandbox changed, it will be killed and re-created.
	  Normal   Pulling         7m51s (x4 over 9m16s)  kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Warning  Failed          7m51s (x4 over 9m16s)  kubelet            Failed to pull image "gcr.io/k8s-minikube/busybox:1.28.4-glibc": Error response from daemon: Head "https://gcr.io/v2/k8s-minikube/busybox/manifests/1.28.4-glibc": unauthorized: authentication failed
	  Warning  Failed          7m51s (x4 over 9m16s)  kubelet            Error: ErrImagePull
	  Warning  Failed          7m38s (x6 over 9m15s)  kubelet            Error: ImagePullBackOff
	  Normal   BackOff         4m5s (x22 over 9m15s)  kubelet            Back-off pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"

-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "csi-hostpath-attacher-0" not found
	Error from server (NotFound): pods "csi-hostpath-resizer-0" not found
	Error from server (NotFound): pods "csi-hostpathplugin-vwjf7" not found

** /stderr **
helpers_test.go:279: kubectl --context addons-545460 describe pod busybox csi-hostpath-attacher-0 csi-hostpath-resizer-0 csi-hostpathplugin-vwjf7: exit status 1
--- FAIL: TestAddons/parallel/Registry (74.45s)


Test pass (308/340)
Order passed test Duration
3 TestDownloadOnly/v1.20.0/json-events 6.65
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.06
9 TestDownloadOnly/v1.20.0/DeleteAll 0.13
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.12
12 TestDownloadOnly/v1.31.1/json-events 3.78
13 TestDownloadOnly/v1.31.1/preload-exists 0
17 TestDownloadOnly/v1.31.1/LogsDuration 0.06
18 TestDownloadOnly/v1.31.1/DeleteAll 0.13
19 TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds 0.12
21 TestBinaryMirror 0.59
22 TestOffline 89.79
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.05
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.05
27 TestAddons/Setup 225.77
29 TestAddons/serial/Volcano 40.8
31 TestAddons/serial/GCPAuth/Namespaces 0.11
34 TestAddons/parallel/Ingress 21.51
35 TestAddons/parallel/InspektorGadget 10.77
36 TestAddons/parallel/MetricsServer 5.74
38 TestAddons/parallel/CSI 60.75
39 TestAddons/parallel/Headlamp 19.75
40 TestAddons/parallel/CloudSpanner 6.49
41 TestAddons/parallel/LocalPath 12.07
42 TestAddons/parallel/NvidiaDevicePlugin 6.49
43 TestAddons/parallel/Yakd 11.73
44 TestAddons/StoppedEnableDisable 13.58
45 TestCertOptions 103.31
46 TestCertExpiration 338.07
47 TestDockerFlags 84.24
48 TestForceSystemdFlag 90.74
49 TestForceSystemdEnv 96.5
51 TestKVMDriverInstallOrUpdate 3.22
55 TestErrorSpam/setup 50.24
56 TestErrorSpam/start 0.34
57 TestErrorSpam/status 0.74
58 TestErrorSpam/pause 1.21
59 TestErrorSpam/unpause 1.4
60 TestErrorSpam/stop 15.76
63 TestFunctional/serial/CopySyncFile 0
64 TestFunctional/serial/StartWithProxy 66.92
65 TestFunctional/serial/AuditLog 0
66 TestFunctional/serial/SoftStart 42.1
67 TestFunctional/serial/KubeContext 0.04
68 TestFunctional/serial/KubectlGetPods 0.08
71 TestFunctional/serial/CacheCmd/cache/add_remote 2.5
72 TestFunctional/serial/CacheCmd/cache/add_local 0.98
73 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.05
74 TestFunctional/serial/CacheCmd/cache/list 0.04
75 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.21
76 TestFunctional/serial/CacheCmd/cache/cache_reload 1.17
77 TestFunctional/serial/CacheCmd/cache/delete 0.09
78 TestFunctional/serial/MinikubeKubectlCmd 0.1
79 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.1
80 TestFunctional/serial/ExtraConfig 43.56
81 TestFunctional/serial/ComponentHealth 0.07
82 TestFunctional/serial/LogsCmd 0.99
83 TestFunctional/serial/LogsFileCmd 1
84 TestFunctional/serial/InvalidService 4.48
86 TestFunctional/parallel/ConfigCmd 0.35
87 TestFunctional/parallel/DashboardCmd 30.14
88 TestFunctional/parallel/DryRun 0.29
89 TestFunctional/parallel/InternationalLanguage 0.15
90 TestFunctional/parallel/StatusCmd 1.01
94 TestFunctional/parallel/ServiceCmdConnect 12.55
95 TestFunctional/parallel/AddonsCmd 0.13
96 TestFunctional/parallel/PersistentVolumeClaim 47.56
98 TestFunctional/parallel/SSHCmd 0.43
99 TestFunctional/parallel/CpCmd 1.3
100 TestFunctional/parallel/MySQL 31.67
101 TestFunctional/parallel/FileSync 0.25
102 TestFunctional/parallel/CertSync 1.32
106 TestFunctional/parallel/NodeLabels 0.06
108 TestFunctional/parallel/NonActiveRuntimeDisabled 0.24
110 TestFunctional/parallel/License 0.22
120 TestFunctional/parallel/ServiceCmd/DeployApp 11.17
121 TestFunctional/parallel/ProfileCmd/profile_not_create 0.34
122 TestFunctional/parallel/ProfileCmd/profile_list 0.31
123 TestFunctional/parallel/ProfileCmd/profile_json_output 0.32
124 TestFunctional/parallel/MountCmd/any-port 8.73
125 TestFunctional/parallel/MountCmd/specific-port 1.67
126 TestFunctional/parallel/ServiceCmd/List 0.3
127 TestFunctional/parallel/MountCmd/VerifyCleanup 1.5
128 TestFunctional/parallel/ServiceCmd/JSONOutput 0.44
129 TestFunctional/parallel/ServiceCmd/HTTPS 0.33
130 TestFunctional/parallel/ServiceCmd/Format 0.34
131 TestFunctional/parallel/ServiceCmd/URL 0.38
132 TestFunctional/parallel/DockerEnv/bash 1.16
133 TestFunctional/parallel/Version/short 0.05
134 TestFunctional/parallel/Version/components 0.56
135 TestFunctional/parallel/UpdateContextCmd/no_changes 0.09
136 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.1
137 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.09
138 TestFunctional/parallel/ImageCommands/ImageListShort 0.22
139 TestFunctional/parallel/ImageCommands/ImageListTable 0.2
140 TestFunctional/parallel/ImageCommands/ImageListJson 0.22
141 TestFunctional/parallel/ImageCommands/ImageListYaml 0.23
142 TestFunctional/parallel/ImageCommands/ImageBuild 2.76
143 TestFunctional/parallel/ImageCommands/Setup 0.96
144 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.2
145 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.79
146 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.16
147 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.44
148 TestFunctional/parallel/ImageCommands/ImageRemove 0.8
149 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.71
150 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.39
151 TestFunctional/delete_echo-server_images 0.04
152 TestFunctional/delete_my-image_image 0.01
153 TestFunctional/delete_minikube_cached_images 0.01
154 TestGvisorAddon 257.81
157 TestMultiControlPlane/serial/StartCluster 216.71
158 TestMultiControlPlane/serial/DeployApp 5.82
159 TestMultiControlPlane/serial/PingHostFromPods 1.26
160 TestMultiControlPlane/serial/AddWorkerNode 63.94
161 TestMultiControlPlane/serial/NodeLabels 0.07
162 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.85
163 TestMultiControlPlane/serial/CopyFile 12.7
164 TestMultiControlPlane/serial/StopSecondaryNode 13.94
165 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.63
166 TestMultiControlPlane/serial/RestartSecondaryNode 44.28
167 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.84
168 TestMultiControlPlane/serial/RestartClusterKeepsNodes 301.63
169 TestMultiControlPlane/serial/DeleteSecondaryNode 7.15
170 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.64
171 TestMultiControlPlane/serial/StopCluster 37.69
172 TestMultiControlPlane/serial/RestartCluster 163.91
173 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.6
174 TestMultiControlPlane/serial/AddSecondaryNode 85.07
175 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.82
178 TestImageBuild/serial/Setup 51.42
179 TestImageBuild/serial/NormalBuild 1.47
180 TestImageBuild/serial/BuildWithBuildArg 0.93
181 TestImageBuild/serial/BuildWithDockerIgnore 0.63
182 TestImageBuild/serial/BuildWithSpecifiedDockerfile 0.65
186 TestJSONOutput/start/Command 89.4
187 TestJSONOutput/start/Audit 0
189 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
190 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
192 TestJSONOutput/pause/Command 0.56
193 TestJSONOutput/pause/Audit 0
195 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
196 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
198 TestJSONOutput/unpause/Command 0.54
199 TestJSONOutput/unpause/Audit 0
201 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
202 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
204 TestJSONOutput/stop/Command 7.55
205 TestJSONOutput/stop/Audit 0
207 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
208 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
209 TestErrorJSONOutput 0.2
214 TestMainNoArgs 0.05
215 TestMinikubeProfile 102.73
218 TestMountStart/serial/StartWithMountFirst 31.19
219 TestMountStart/serial/VerifyMountFirst 0.37
220 TestMountStart/serial/StartWithMountSecond 31.3
221 TestMountStart/serial/VerifyMountSecond 0.36
222 TestMountStart/serial/DeleteFirst 0.68
223 TestMountStart/serial/VerifyMountPostDelete 0.36
224 TestMountStart/serial/Stop 2.27
225 TestMountStart/serial/RestartStopped 26.49
226 TestMountStart/serial/VerifyMountPostStop 0.36
229 TestMultiNode/serial/FreshStart2Nodes 129.48
230 TestMultiNode/serial/DeployApp2Nodes 3.35
231 TestMultiNode/serial/PingHostFrom2Pods 0.81
232 TestMultiNode/serial/AddNode 59.12
233 TestMultiNode/serial/MultiNodeLabels 0.06
234 TestMultiNode/serial/ProfileList 0.57
235 TestMultiNode/serial/CopyFile 7.04
236 TestMultiNode/serial/StopNode 3.33
237 TestMultiNode/serial/StartAfterStop 42.1
238 TestMultiNode/serial/RestartKeepsNodes 193.68
239 TestMultiNode/serial/DeleteNode 2.06
240 TestMultiNode/serial/StopMultiNode 25.85
241 TestMultiNode/serial/RestartMultiNode 179.1
242 TestMultiNode/serial/ValidateNameConflict 51.76
247 TestPreload 156.68
249 TestScheduledStopUnix 121.66
250 TestSkaffold 124.67
253 TestRunningBinaryUpgrade 203.94
255 TestKubernetesUpgrade 187.03
276 TestPause/serial/Start 141.47
278 TestNoKubernetes/serial/StartNoK8sWithVersion 0.38
279 TestNoKubernetes/serial/StartWithK8s 84.45
280 TestPause/serial/SecondStartNoReconfiguration 48.55
281 TestNoKubernetes/serial/StartWithStopK8s 17.87
282 TestStoppedBinaryUpgrade/Setup 0.47
283 TestStoppedBinaryUpgrade/Upgrade 174.32
284 TestNoKubernetes/serial/Start 48.26
285 TestPause/serial/Pause 0.67
286 TestPause/serial/VerifyStatus 0.27
287 TestPause/serial/Unpause 0.58
288 TestPause/serial/PauseAgain 0.78
289 TestPause/serial/DeletePaused 0.8
290 TestPause/serial/VerifyDeletedResources 4.13
291 TestNoKubernetes/serial/VerifyK8sNotRunning 0.21
292 TestNoKubernetes/serial/ProfileList 1.21
293 TestNoKubernetes/serial/Stop 2.33
294 TestNoKubernetes/serial/StartNoArgs 71.79
295 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.19
296 TestStoppedBinaryUpgrade/MinikubeLogs 1.81
297 TestNetworkPlugins/group/auto/Start 78.28
298 TestNetworkPlugins/group/flannel/Start 99.98
299 TestNetworkPlugins/group/enable-default-cni/Start 122.85
300 TestNetworkPlugins/group/auto/KubeletFlags 0.29
301 TestNetworkPlugins/group/auto/NetCatPod 12.82
302 TestNetworkPlugins/group/auto/DNS 0.16
303 TestNetworkPlugins/group/auto/Localhost 0.13
304 TestNetworkPlugins/group/auto/HairPin 0.15
305 TestNetworkPlugins/group/bridge/Start 97.75
306 TestNetworkPlugins/group/flannel/ControllerPod 6.01
307 TestNetworkPlugins/group/flannel/KubeletFlags 0.24
308 TestNetworkPlugins/group/flannel/NetCatPod 12.25
309 TestNetworkPlugins/group/flannel/DNS 0.18
310 TestNetworkPlugins/group/flannel/Localhost 0.16
311 TestNetworkPlugins/group/flannel/HairPin 0.16
312 TestNetworkPlugins/group/kubenet/Start 63.88
313 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.22
314 TestNetworkPlugins/group/enable-default-cni/NetCatPod 10.23
315 TestNetworkPlugins/group/enable-default-cni/DNS 0.17
316 TestNetworkPlugins/group/enable-default-cni/Localhost 0.16
317 TestNetworkPlugins/group/enable-default-cni/HairPin 0.16
318 TestNetworkPlugins/group/calico/Start 91.27
319 TestNetworkPlugins/group/kindnet/Start 98.39
320 TestNetworkPlugins/group/bridge/KubeletFlags 0.22
321 TestNetworkPlugins/group/bridge/NetCatPod 12.26
322 TestNetworkPlugins/group/kubenet/KubeletFlags 0.2
323 TestNetworkPlugins/group/kubenet/NetCatPod 10.2
324 TestNetworkPlugins/group/bridge/DNS 0.17
325 TestNetworkPlugins/group/bridge/Localhost 0.14
326 TestNetworkPlugins/group/bridge/HairPin 0.14
327 TestNetworkPlugins/group/kubenet/DNS 0.19
328 TestNetworkPlugins/group/kubenet/Localhost 0.14
329 TestNetworkPlugins/group/kubenet/HairPin 0.15
330 TestNetworkPlugins/group/custom-flannel/Start 83.85
331 TestNetworkPlugins/group/false/Start 102.51
332 TestNetworkPlugins/group/calico/ControllerPod 6.01
333 TestNetworkPlugins/group/calico/KubeletFlags 0.21
334 TestNetworkPlugins/group/calico/NetCatPod 10.22
335 TestNetworkPlugins/group/calico/DNS 0.17
336 TestNetworkPlugins/group/calico/Localhost 0.13
337 TestNetworkPlugins/group/calico/HairPin 0.14
338 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
339 TestNetworkPlugins/group/kindnet/KubeletFlags 0.29
340 TestNetworkPlugins/group/kindnet/NetCatPod 13.32
341 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.25
342 TestNetworkPlugins/group/custom-flannel/NetCatPod 10.29
344 TestStartStop/group/old-k8s-version/serial/FirstStart 171.39
345 TestNetworkPlugins/group/kindnet/DNS 0.2
346 TestNetworkPlugins/group/kindnet/Localhost 0.18
347 TestNetworkPlugins/group/kindnet/HairPin 0.19
348 TestNetworkPlugins/group/custom-flannel/DNS 0.2
349 TestNetworkPlugins/group/custom-flannel/Localhost 0.17
350 TestNetworkPlugins/group/custom-flannel/HairPin 0.2
352 TestStartStop/group/no-preload/serial/FirstStart 82.35
353 TestNetworkPlugins/group/false/KubeletFlags 0.21
354 TestNetworkPlugins/group/false/NetCatPod 11.22
356 TestStartStop/group/embed-certs/serial/FirstStart 96.67
357 TestNetworkPlugins/group/false/DNS 0.2
358 TestNetworkPlugins/group/false/Localhost 0.17
359 TestNetworkPlugins/group/false/HairPin 0.18
361 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 137.02
362 TestStartStop/group/no-preload/serial/DeployApp 8.33
363 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.01
364 TestStartStop/group/no-preload/serial/Stop 13.37
365 TestStartStop/group/embed-certs/serial/DeployApp 8.39
366 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.19
367 TestStartStop/group/no-preload/serial/SecondStart 324.49
368 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.12
369 TestStartStop/group/embed-certs/serial/Stop 13.35
370 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.21
371 TestStartStop/group/embed-certs/serial/SecondStart 309.92
372 TestStartStop/group/old-k8s-version/serial/DeployApp 8.56
373 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.05
374 TestStartStop/group/old-k8s-version/serial/Stop 13.34
375 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 8.32
376 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.98
377 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.2
378 TestStartStop/group/old-k8s-version/serial/SecondStart 398.98
379 TestStartStop/group/default-k8s-diff-port/serial/Stop 13.35
380 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.18
381 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 306.82
382 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6.01
383 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.08
384 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 7.01
385 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.2
386 TestStartStop/group/no-preload/serial/Pause 2.46
387 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.08
389 TestStartStop/group/newest-cni/serial/FirstStart 61.38
390 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.22
391 TestStartStop/group/embed-certs/serial/Pause 2.59
392 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6.01
393 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.09
394 TestStartStop/group/newest-cni/serial/DeployApp 0
395 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.95
396 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.23
397 TestStartStop/group/default-k8s-diff-port/serial/Pause 2.5
398 TestStartStop/group/newest-cni/serial/Stop 7.87
399 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.18
400 TestStartStop/group/newest-cni/serial/SecondStart 38.43
401 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
402 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
403 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.26
404 TestStartStop/group/newest-cni/serial/Pause 2.48
405 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6.01
406 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.07
407 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.2
408 TestStartStop/group/old-k8s-version/serial/Pause 2.29
TestDownloadOnly/v1.20.0/json-events (6.65s)

=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-462178 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=kvm2 
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-462178 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=kvm2 : (6.653179991s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (6.65s)

TestDownloadOnly/v1.20.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.20.0/preload-exists
I0920 17:48:02.502058   83346 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
I0920 17:48:02.502164   83346 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19678-76160/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

TestDownloadOnly/v1.20.0/LogsDuration (0.06s)

=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-462178
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-462178: exit status 85 (57.069112ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-462178 | jenkins | v1.34.0 | 20 Sep 24 17:47 UTC |          |
	|         | -p download-only-462178        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/20 17:47:55
	Running on machine: ubuntu-20-agent-12
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0920 17:47:55.887781   83358 out.go:345] Setting OutFile to fd 1 ...
	I0920 17:47:55.887881   83358 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 17:47:55.887888   83358 out.go:358] Setting ErrFile to fd 2...
	I0920 17:47:55.887893   83358 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 17:47:55.888066   83358 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19678-76160/.minikube/bin
	W0920 17:47:55.888172   83358 root.go:314] Error reading config file at /home/jenkins/minikube-integration/19678-76160/.minikube/config/config.json: open /home/jenkins/minikube-integration/19678-76160/.minikube/config/config.json: no such file or directory
	I0920 17:47:55.888755   83358 out.go:352] Setting JSON to true
	I0920 17:47:55.889673   83358 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-12","uptime":5427,"bootTime":1726849049,"procs":179,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0920 17:47:55.889774   83358 start.go:139] virtualization: kvm guest
	I0920 17:47:55.892252   83358 out.go:97] [download-only-462178] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	W0920 17:47:55.892397   83358 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/19678-76160/.minikube/cache/preloaded-tarball: no such file or directory
	I0920 17:47:55.892449   83358 notify.go:220] Checking for updates...
	I0920 17:47:55.893938   83358 out.go:169] MINIKUBE_LOCATION=19678
	I0920 17:47:55.895292   83358 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0920 17:47:55.896661   83358 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19678-76160/kubeconfig
	I0920 17:47:55.897969   83358 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19678-76160/.minikube
	I0920 17:47:55.899299   83358 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0920 17:47:55.901754   83358 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0920 17:47:55.901967   83358 driver.go:394] Setting default libvirt URI to qemu:///system
	I0920 17:47:55.938968   83358 out.go:97] Using the kvm2 driver based on user configuration
	I0920 17:47:55.938997   83358 start.go:297] selected driver: kvm2
	I0920 17:47:55.939003   83358 start.go:901] validating driver "kvm2" against <nil>
	I0920 17:47:55.939323   83358 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 17:47:55.939415   83358 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19678-76160/.minikube/bin:/home/jenkins/workspace/KVM_Linux_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0920 17:47:55.954415   83358 install.go:137] /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0920 17:47:55.954464   83358 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0920 17:47:55.955019   83358 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0920 17:47:55.955177   83358 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0920 17:47:55.955203   83358 cni.go:84] Creating CNI manager for ""
	I0920 17:47:55.955276   83358 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0920 17:47:55.955341   83358 start.go:340] cluster config:
	{Name:download-only-462178 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-462178 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 17:47:55.955534   83358 iso.go:125] acquiring lock: {Name:mk2228d1b417575d45b5c1ebe8ab98349c7e233e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 17:47:55.957371   83358 out.go:97] Downloading VM boot image ...
	I0920 17:47:55.957498   83358 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso.sha256 -> /home/jenkins/minikube-integration/19678-76160/.minikube/cache/iso/amd64/minikube-v1.34.0-1726481713-19649-amd64.iso
	I0920 17:47:58.614201   83358 out.go:97] Starting "download-only-462178" primary control-plane node in "download-only-462178" cluster
	I0920 17:47:58.614234   83358 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0920 17:47:58.640887   83358 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4
	I0920 17:47:58.640917   83358 cache.go:56] Caching tarball of preloaded images
	I0920 17:47:58.641079   83358 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0920 17:47:58.642864   83358 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0920 17:47:58.642885   83358 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4 ...
	I0920 17:47:58.677122   83358 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4?checksum=md5:9a82241e9b8b4ad2b5cca73108f2c7a3 -> /home/jenkins/minikube-integration/19678-76160/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4
	
	
	* The control-plane node download-only-462178 host does not exist
	  To start a cluster, run: "minikube start -p download-only-462178"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.06s)

TestDownloadOnly/v1.20.0/DeleteAll (0.13s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.13s)

TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.12s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-462178
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.12s)

TestDownloadOnly/v1.31.1/json-events (3.78s)

=== RUN   TestDownloadOnly/v1.31.1/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-182067 --force --alsologtostderr --kubernetes-version=v1.31.1 --container-runtime=docker --driver=kvm2 
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-182067 --force --alsologtostderr --kubernetes-version=v1.31.1 --container-runtime=docker --driver=kvm2 : (3.778955668s)
--- PASS: TestDownloadOnly/v1.31.1/json-events (3.78s)

TestDownloadOnly/v1.31.1/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.31.1/preload-exists
I0920 17:48:06.586476   83346 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
I0920 17:48:06.586518   83346 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19678-76160/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.31.1/preload-exists (0.00s)

TestDownloadOnly/v1.31.1/LogsDuration (0.06s)

=== RUN   TestDownloadOnly/v1.31.1/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-182067
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-182067: exit status 85 (56.524713ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-462178 | jenkins | v1.34.0 | 20 Sep 24 17:47 UTC |                     |
	|         | -p download-only-462178        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.34.0 | 20 Sep 24 17:48 UTC | 20 Sep 24 17:48 UTC |
	| delete  | -p download-only-462178        | download-only-462178 | jenkins | v1.34.0 | 20 Sep 24 17:48 UTC | 20 Sep 24 17:48 UTC |
	| start   | -o=json --download-only        | download-only-182067 | jenkins | v1.34.0 | 20 Sep 24 17:48 UTC |                     |
	|         | -p download-only-182067        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/20 17:48:02
	Running on machine: ubuntu-20-agent-12
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0920 17:48:02.845860   83550 out.go:345] Setting OutFile to fd 1 ...
	I0920 17:48:02.846002   83550 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 17:48:02.846014   83550 out.go:358] Setting ErrFile to fd 2...
	I0920 17:48:02.846021   83550 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 17:48:02.846210   83550 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19678-76160/.minikube/bin
	I0920 17:48:02.846750   83550 out.go:352] Setting JSON to true
	I0920 17:48:02.847701   83550 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-12","uptime":5434,"bootTime":1726849049,"procs":177,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0920 17:48:02.847797   83550 start.go:139] virtualization: kvm guest
	I0920 17:48:02.849726   83550 out.go:97] [download-only-182067] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0920 17:48:02.849855   83550 notify.go:220] Checking for updates...
	I0920 17:48:02.851297   83550 out.go:169] MINIKUBE_LOCATION=19678
	I0920 17:48:02.852871   83550 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0920 17:48:02.854256   83550 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19678-76160/kubeconfig
	I0920 17:48:02.855573   83550 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19678-76160/.minikube
	I0920 17:48:02.857137   83550 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	
	
	* The control-plane node download-only-182067 host does not exist
	  To start a cluster, run: "minikube start -p download-only-182067"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.1/LogsDuration (0.06s)

TestDownloadOnly/v1.31.1/DeleteAll (0.13s)

=== RUN   TestDownloadOnly/v1.31.1/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.31.1/DeleteAll (0.13s)

TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds (0.12s)

=== RUN   TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-182067
--- PASS: TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds (0.12s)

TestBinaryMirror (0.59s)

=== RUN   TestBinaryMirror
I0920 17:48:07.151998   83346 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl.sha256
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-504558 --alsologtostderr --binary-mirror http://127.0.0.1:39541 --driver=kvm2 
helpers_test.go:175: Cleaning up "binary-mirror-504558" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-504558
--- PASS: TestBinaryMirror (0.59s)

TestOffline (89.79s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-docker-217101 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2 
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-docker-217101 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2 : (1m28.186523132s)
helpers_test.go:175: Cleaning up "offline-docker-217101" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-docker-217101
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-docker-217101: (1.598596935s)
--- PASS: TestOffline (89.79s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:975: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-545460
addons_test.go:975: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-545460: exit status 85 (48.704498ms)

-- stdout --
	* Profile "addons-545460" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-545460"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:986: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-545460
addons_test.go:986: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-545460: exit status 85 (49.187505ms)

-- stdout --
	* Profile "addons-545460" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-545460"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

TestAddons/Setup (225.77s)

=== RUN   TestAddons/Setup
addons_test.go:107: (dbg) Run:  out/minikube-linux-amd64 start -p addons-545460 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=kvm2  --addons=ingress --addons=ingress-dns
addons_test.go:107: (dbg) Done: out/minikube-linux-amd64 start -p addons-545460 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=kvm2  --addons=ingress --addons=ingress-dns: (3m45.771125504s)
--- PASS: TestAddons/Setup (225.77s)

TestAddons/serial/Volcano (40.8s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:843: volcano-admission stabilized in 26.473568ms
addons_test.go:835: volcano-scheduler stabilized in 26.629607ms
addons_test.go:851: volcano-controller stabilized in 28.965059ms
addons_test.go:857: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-scheduler-576bc46687-2jbbz" [dc97e793-3319-4b14-8b53-91fac6f772f5] Running
addons_test.go:857: (dbg) TestAddons/serial/Volcano: app=volcano-scheduler healthy within 6.003870646s
addons_test.go:861: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-admission-77d7d48b68-s68fj" [6fbe2de0-508e-4712-bd89-2f963f1b8715] Running
addons_test.go:861: (dbg) TestAddons/serial/Volcano: app=volcano-admission healthy within 5.00535357s
addons_test.go:865: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-controllers-56675bb4d5-d2qr9" [a8d48751-cc4f-4591-8b8b-6bf821bcc61c] Running
addons_test.go:865: (dbg) TestAddons/serial/Volcano: app=volcano-controller healthy within 5.003622524s
addons_test.go:870: (dbg) Run:  kubectl --context addons-545460 delete -n volcano-system job volcano-admission-init
addons_test.go:876: (dbg) Run:  kubectl --context addons-545460 create -f testdata/vcjob.yaml
addons_test.go:884: (dbg) Run:  kubectl --context addons-545460 get vcjob -n my-volcano
addons_test.go:902: (dbg) TestAddons/serial/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:344: "test-job-nginx-0" [aece1ddb-b6a7-4528-8946-44491e3ba3e0] Pending
helpers_test.go:344: "test-job-nginx-0" [aece1ddb-b6a7-4528-8946-44491e3ba3e0] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "test-job-nginx-0" [aece1ddb-b6a7-4528-8946-44491e3ba3e0] Running
addons_test.go:902: (dbg) TestAddons/serial/Volcano: volcano.sh/job-name=test-job healthy within 14.004635043s
addons_test.go:906: (dbg) Run:  out/minikube-linux-amd64 -p addons-545460 addons disable volcano --alsologtostderr -v=1
addons_test.go:906: (dbg) Done: out/minikube-linux-amd64 -p addons-545460 addons disable volcano --alsologtostderr -v=1: (10.398746361s)
--- PASS: TestAddons/serial/Volcano (40.80s)

TestAddons/serial/GCPAuth/Namespaces (0.11s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:594: (dbg) Run:  kubectl --context addons-545460 create ns new-namespace
addons_test.go:608: (dbg) Run:  kubectl --context addons-545460 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.11s)

TestAddons/parallel/Ingress (21.51s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:205: (dbg) Run:  kubectl --context addons-545460 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:230: (dbg) Run:  kubectl --context addons-545460 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:243: (dbg) Run:  kubectl --context addons-545460 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:248: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [fadb4dc1-1ae6-4b4c-9675-ef92f5f6745e] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [fadb4dc1-1ae6-4b4c-9675-ef92f5f6745e] Running
addons_test.go:248: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 11.004257061s
I0920 18:01:12.120317   83346 kapi.go:150] Service nginx in namespace default found.
addons_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p addons-545460 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:284: (dbg) Run:  kubectl --context addons-545460 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:289: (dbg) Run:  out/minikube-linux-amd64 -p addons-545460 ip
addons_test.go:295: (dbg) Run:  nslookup hello-john.test 192.168.39.174
addons_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p addons-545460 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:304: (dbg) Done: out/minikube-linux-amd64 -p addons-545460 addons disable ingress-dns --alsologtostderr -v=1: (1.55109739s)
addons_test.go:309: (dbg) Run:  out/minikube-linux-amd64 -p addons-545460 addons disable ingress --alsologtostderr -v=1
addons_test.go:309: (dbg) Done: out/minikube-linux-amd64 -p addons-545460 addons disable ingress --alsologtostderr -v=1: (7.690739497s)
--- PASS: TestAddons/parallel/Ingress (21.51s)

TestAddons/parallel/InspektorGadget (10.77s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:786: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-mqjc9" [3c2493d2-4772-473e-a8f9-38293c88f8a5] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:786: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.005418077s
addons_test.go:789: (dbg) Run:  out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-545460
addons_test.go:789: (dbg) Done: out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-545460: (5.762850124s)
--- PASS: TestAddons/parallel/InspektorGadget (10.77s)

TestAddons/parallel/MetricsServer (5.74s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:405: metrics-server stabilized in 2.504405ms
addons_test.go:407: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-84c5f94fbc-klw4f" [797d18c7-0814-4734-bef9-1573075bab38] Running
addons_test.go:407: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.003917311s
addons_test.go:413: (dbg) Run:  kubectl --context addons-545460 top pods -n kube-system
addons_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p addons-545460 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.74s)

TestAddons/parallel/CSI (60.75s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
I0920 18:00:55.593254   83346 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I0920 18:00:55.597955   83346 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I0920 18:00:55.597980   83346 kapi.go:107] duration metric: took 4.750606ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:505: csi-hostpath-driver pods stabilized in 4.759404ms
addons_test.go:508: (dbg) Run:  kubectl --context addons-545460 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:513: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-545460 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-545460 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-545460 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-545460 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-545460 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-545460 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-545460 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-545460 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-545460 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-545460 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-545460 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-545460 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-545460 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-545460 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-545460 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-545460 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-545460 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-545460 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-545460 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:518: (dbg) Run:  kubectl --context addons-545460 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:523: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [07b150a3-d1d9-4968-9751-a936d64c362f] Pending
helpers_test.go:344: "task-pv-pod" [07b150a3-d1d9-4968-9751-a936d64c362f] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [07b150a3-d1d9-4968-9751-a936d64c362f] Running
addons_test.go:523: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 7.005416666s
addons_test.go:528: (dbg) Run:  kubectl --context addons-545460 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:533: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-545460 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-545460 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:538: (dbg) Run:  kubectl --context addons-545460 delete pod task-pv-pod
addons_test.go:544: (dbg) Run:  kubectl --context addons-545460 delete pvc hpvc
addons_test.go:550: (dbg) Run:  kubectl --context addons-545460 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:555: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-545460 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-545460 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-545460 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-545460 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-545460 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-545460 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-545460 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-545460 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-545460 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-545460 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-545460 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-545460 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-545460 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-545460 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-545460 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-545460 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-545460 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:560: (dbg) Run:  kubectl --context addons-545460 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:565: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [29d60276-625d-4b4b-9124-f3ff4ff5fe4b] Pending
helpers_test.go:344: "task-pv-pod-restore" [29d60276-625d-4b4b-9124-f3ff4ff5fe4b] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [29d60276-625d-4b4b-9124-f3ff4ff5fe4b] Running
addons_test.go:565: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.003953022s
addons_test.go:570: (dbg) Run:  kubectl --context addons-545460 delete pod task-pv-pod-restore
addons_test.go:570: (dbg) Done: kubectl --context addons-545460 delete pod task-pv-pod-restore: (1.000989752s)
addons_test.go:574: (dbg) Run:  kubectl --context addons-545460 delete pvc hpvc-restore
2024/09/20 18:01:48 [DEBUG] GET http://192.168.39.174:5000
addons_test.go:578: (dbg) Run:  kubectl --context addons-545460 delete volumesnapshot new-snapshot-demo
addons_test.go:582: (dbg) Run:  out/minikube-linux-amd64 -p addons-545460 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:582: (dbg) Done: out/minikube-linux-amd64 -p addons-545460 addons disable csi-hostpath-driver --alsologtostderr -v=1: (7.103732379s)
addons_test.go:586: (dbg) Run:  out/minikube-linux-amd64 -p addons-545460 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (60.75s)
TestAddons/parallel/Headlamp (19.75s)
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:768: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-545460 --alsologtostderr -v=1
addons_test.go:768: (dbg) Done: out/minikube-linux-amd64 addons enable headlamp -p addons-545460 --alsologtostderr -v=1: (1.033469782s)
addons_test.go:773: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-7b5c95b59d-jfnsc" [645065a9-d4cf-43c4-b6bb-b1da6b4d461c] Pending
helpers_test.go:344: "headlamp-7b5c95b59d-jfnsc" [645065a9-d4cf-43c4-b6bb-b1da6b4d461c] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-7b5c95b59d-jfnsc" [645065a9-d4cf-43c4-b6bb-b1da6b4d461c] Running
addons_test.go:773: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 13.008951058s
addons_test.go:777: (dbg) Run:  out/minikube-linux-amd64 -p addons-545460 addons disable headlamp --alsologtostderr -v=1
addons_test.go:777: (dbg) Done: out/minikube-linux-amd64 -p addons-545460 addons disable headlamp --alsologtostderr -v=1: (5.704218852s)
--- PASS: TestAddons/parallel/Headlamp (19.75s)
TestAddons/parallel/CloudSpanner (6.49s)
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:805: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-769b77f747-5x2fl" [d2c42a85-4c30-4a7d-a944-ee30800cc31a] Running
addons_test.go:805: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 6.005918766s
addons_test.go:808: (dbg) Run:  out/minikube-linux-amd64 addons disable cloud-spanner -p addons-545460
--- PASS: TestAddons/parallel/CloudSpanner (6.49s)
TestAddons/parallel/LocalPath (12.07s)
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:920: (dbg) Run:  kubectl --context addons-545460 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:926: (dbg) Run:  kubectl --context addons-545460 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:930: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-545460 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-545460 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-545460 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-545460 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-545460 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-545460 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:933: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [23f4a451-4db6-4d6c-9154-e4c37e6c470a] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [23f4a451-4db6-4d6c-9154-e4c37e6c470a] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [23f4a451-4db6-4d6c-9154-e4c37e6c470a] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:933: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 6.004179832s
addons_test.go:938: (dbg) Run:  kubectl --context addons-545460 get pvc test-pvc -o=json
addons_test.go:947: (dbg) Run:  out/minikube-linux-amd64 -p addons-545460 ssh "cat /opt/local-path-provisioner/pvc-e568ad18-8c41-4cde-a7ed-cb91652aed7c_default_test-pvc/file1"
addons_test.go:959: (dbg) Run:  kubectl --context addons-545460 delete pod test-local-path
addons_test.go:963: (dbg) Run:  kubectl --context addons-545460 delete pvc test-pvc
addons_test.go:967: (dbg) Run:  out/minikube-linux-amd64 -p addons-545460 addons disable storage-provisioner-rancher --alsologtostderr -v=1
--- PASS: TestAddons/parallel/LocalPath (12.07s)
TestAddons/parallel/NvidiaDevicePlugin (6.49s)
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:999: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-vk68k" [df7a9429-e3d3-4ec6-8ae2-ff9a97192f55] Running
addons_test.go:999: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.005617352s
addons_test.go:1002: (dbg) Run:  out/minikube-linux-amd64 addons disable nvidia-device-plugin -p addons-545460
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.49s)
TestAddons/parallel/Yakd (11.73s)
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd
=== CONT  TestAddons/parallel/Yakd
addons_test.go:1010: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-67d98fc6b-rhftx" [fab9c410-72c3-4146-90b5-0fe89e55a24f] Running
addons_test.go:1010: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.003856484s
addons_test.go:1014: (dbg) Run:  out/minikube-linux-amd64 -p addons-545460 addons disable yakd --alsologtostderr -v=1
addons_test.go:1014: (dbg) Done: out/minikube-linux-amd64 -p addons-545460 addons disable yakd --alsologtostderr -v=1: (5.729366614s)
--- PASS: TestAddons/parallel/Yakd (11.73s)
TestAddons/StoppedEnableDisable (13.58s)
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:170: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-545460
addons_test.go:170: (dbg) Done: out/minikube-linux-amd64 stop -p addons-545460: (13.315152126s)
addons_test.go:174: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-545460
addons_test.go:178: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-545460
addons_test.go:183: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-545460
--- PASS: TestAddons/StoppedEnableDisable (13.58s)
TestCertOptions (103.31s)
=== RUN   TestCertOptions
=== PAUSE TestCertOptions
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-756036 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2 
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-756036 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2 : (1m41.697449112s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-756036 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-756036 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-756036 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-756036" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-756036
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-756036: (1.125561599s)
--- PASS: TestCertOptions (103.31s)
TestCertExpiration (338.07s)
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-153597 --memory=2048 --cert-expiration=3m --driver=kvm2 
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-153597 --memory=2048 --cert-expiration=3m --driver=kvm2 : (1m40.999000432s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-153597 --memory=2048 --cert-expiration=8760h --driver=kvm2 
E0920 18:57:26.304666   83346 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-76160/.minikube/profiles/skaffold-806341/client.crt: no such file or directory" logger="UnhandledError"
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-153597 --memory=2048 --cert-expiration=8760h --driver=kvm2 : (55.887077506s)
helpers_test.go:175: Cleaning up "cert-expiration-153597" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-153597
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-153597: (1.182647378s)
--- PASS: TestCertExpiration (338.07s)
TestDockerFlags (84.24s)
=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags
=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-linux-amd64 start -p docker-flags-389068 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=kvm2 
E0920 18:53:20.540690   83346 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-76160/.minikube/profiles/skaffold-806341/client.crt: no such file or directory" logger="UnhandledError"
docker_test.go:51: (dbg) Done: out/minikube-linux-amd64 start -p docker-flags-389068 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=kvm2 : (1m22.521663071s)
docker_test.go:56: (dbg) Run:  out/minikube-linux-amd64 -p docker-flags-389068 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:67: (dbg) Run:  out/minikube-linux-amd64 -p docker-flags-389068 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
helpers_test.go:175: Cleaning up "docker-flags-389068" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-flags-389068
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-flags-389068: (1.164656836s)
--- PASS: TestDockerFlags (84.24s)
TestForceSystemdFlag (90.74s)
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-106024 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2 
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-106024 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2 : (1m29.398863943s)
docker_test.go:110: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-106024 ssh "docker info --format {{.CgroupDriver}}"
E0920 18:52:01.172594   83346 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-76160/.minikube/profiles/skaffold-806341/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:175: Cleaning up "force-systemd-flag-106024" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-106024
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-106024: (1.06047065s)
--- PASS: TestForceSystemdFlag (90.74s)
TestForceSystemdEnv (96.5s)
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-085605 --memory=2048 --alsologtostderr -v=5 --driver=kvm2 
E0920 18:52:08.855991   83346 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-76160/.minikube/profiles/skaffold-806341/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:52:19.097514   83346 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-76160/.minikube/profiles/skaffold-806341/client.crt: no such file or directory" logger="UnhandledError"
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-085605 --memory=2048 --alsologtostderr -v=5 --driver=kvm2 : (1m35.166688997s)
docker_test.go:110: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-env-085605 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-env-085605" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-085605
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-085605: (1.082157806s)
--- PASS: TestForceSystemdEnv (96.50s)
TestKVMDriverInstallOrUpdate (3.22s)
=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate
=== CONT  TestKVMDriverInstallOrUpdate
I0920 18:52:02.482078   83346 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0920 18:52:02.482210   83346 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/workspace/KVM_Linux_integration/testdata/kvm2-driver-without-version:/home/jenkins/workspace/KVM_Linux_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
W0920 18:52:02.517936   83346 install.go:62] docker-machine-driver-kvm2: exit status 1
W0920 18:52:02.518339   83346 out.go:174] [unset outFile]: * Downloading driver docker-machine-driver-kvm2:
I0920 18:52:02.518433   83346 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 -> /tmp/TestKVMDriverInstallOrUpdate1968682329/001/docker-machine-driver-kvm2
I0920 18:52:02.763893   83346 driver.go:46] failed to download arch specific driver: getter: &{Ctx:context.Background Src:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 Dst:/tmp/TestKVMDriverInstallOrUpdate1968682329/001/docker-machine-driver-kvm2.download Pwd: Mode:2 Umask:---------- Detectors:[0x4665640 0x4665640 0x4665640 0x4665640 0x4665640 0x4665640 0x4665640] Decompressors:map[bz2:0xc00012fe10 gz:0xc00012fe18 tar:0xc00012fd60 tar.bz2:0xc00012fdb0 tar.gz:0xc00012fdc0 tar.xz:0xc00012fde0 tar.zst:0xc00012fdf0 tbz2:0xc00012fdb0 tgz:0xc00012fdc0 txz:0xc00012fde0 tzst:0xc00012fdf0 xz:0xc00012fe20 zip:0xc00012fe30 zst:0xc00012fe28] Getters:map[file:0xc0005e2300 http:0xc000744550 https:0xc0007445a0] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404. trying to get the common version
I0920 18:52:02.763949   83346 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2.sha256 -> /tmp/TestKVMDriverInstallOrUpdate1968682329/001/docker-machine-driver-kvm2
E0920 18:52:03.734197   83346 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-76160/.minikube/profiles/skaffold-806341/client.crt: no such file or directory" logger="UnhandledError"
I0920 18:52:04.234081   83346 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0920 18:52:04.234173   83346 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/workspace/KVM_Linux_integration/testdata/kvm2-driver-older-version:/home/jenkins/workspace/KVM_Linux_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
I0920 18:52:04.266896   83346 install.go:137] /home/jenkins/workspace/KVM_Linux_integration/testdata/kvm2-driver-older-version/docker-machine-driver-kvm2 version is 1.1.1
W0920 18:52:04.266935   83346 install.go:62] docker-machine-driver-kvm2: docker-machine-driver-kvm2 is version 1.1.1, want 1.3.0
W0920 18:52:04.267010   83346 out.go:174] [unset outFile]: * Downloading driver docker-machine-driver-kvm2:
I0920 18:52:04.267040   83346 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 -> /tmp/TestKVMDriverInstallOrUpdate1968682329/002/docker-machine-driver-kvm2
I0920 18:52:04.428516   83346 driver.go:46] failed to download arch specific driver: getter: &{Ctx:context.Background Src:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 Dst:/tmp/TestKVMDriverInstallOrUpdate1968682329/002/docker-machine-driver-kvm2.download Pwd: Mode:2 Umask:---------- Detectors:[0x4665640 0x4665640 0x4665640 0x4665640 0x4665640 0x4665640 0x4665640] Decompressors:map[bz2:0xc00012fe10 gz:0xc00012fe18 tar:0xc00012fd60 tar.bz2:0xc00012fdb0 tar.gz:0xc00012fdc0 tar.xz:0xc00012fde0 tar.zst:0xc00012fdf0 tbz2:0xc00012fdb0 tgz:0xc00012fdc0 txz:0xc00012fde0 tzst:0xc00012fdf0 xz:0xc00012fe20 zip:0xc00012fe30 zst:0xc00012fe28] Getters:map[file:0xc0005e3a60 http:0xc000745a90 https:0xc000745ae0] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404. trying to get the common version
I0920 18:52:04.428573   83346 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2.sha256 -> /tmp/TestKVMDriverInstallOrUpdate1968682329/002/docker-machine-driver-kvm2
--- PASS: TestKVMDriverInstallOrUpdate (3.22s)
TestErrorSpam/setup (50.24s)
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-874785 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-874785 --driver=kvm2 
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-874785 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-874785 --driver=kvm2 : (50.243118254s)
--- PASS: TestErrorSpam/setup (50.24s)
TestErrorSpam/start (0.34s)
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-874785 --log_dir /tmp/nospam-874785 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-874785 --log_dir /tmp/nospam-874785 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-874785 --log_dir /tmp/nospam-874785 start --dry-run
--- PASS: TestErrorSpam/start (0.34s)
TestErrorSpam/status (0.74s)
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-874785 --log_dir /tmp/nospam-874785 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-874785 --log_dir /tmp/nospam-874785 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-874785 --log_dir /tmp/nospam-874785 status
--- PASS: TestErrorSpam/status (0.74s)

TestErrorSpam/pause (1.21s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-874785 --log_dir /tmp/nospam-874785 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-874785 --log_dir /tmp/nospam-874785 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-874785 --log_dir /tmp/nospam-874785 pause
--- PASS: TestErrorSpam/pause (1.21s)

TestErrorSpam/unpause (1.4s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-874785 --log_dir /tmp/nospam-874785 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-874785 --log_dir /tmp/nospam-874785 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-874785 --log_dir /tmp/nospam-874785 unpause
--- PASS: TestErrorSpam/unpause (1.40s)

TestErrorSpam/stop (15.76s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-874785 --log_dir /tmp/nospam-874785 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-874785 --log_dir /tmp/nospam-874785 stop: (12.53506829s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-874785 --log_dir /tmp/nospam-874785 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-874785 --log_dir /tmp/nospam-874785 stop: (1.161470965s)
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-874785 --log_dir /tmp/nospam-874785 stop
error_spam_test.go:182: (dbg) Done: out/minikube-linux-amd64 -p nospam-874785 --log_dir /tmp/nospam-874785 stop: (2.06492368s)
--- PASS: TestErrorSpam/stop (15.76s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1855: local sync path: /home/jenkins/minikube-integration/19678-76160/.minikube/files/etc/test/nested/copy/83346/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (66.92s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2234: (dbg) Run:  out/minikube-linux-amd64 start -p functional-698761 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2 
functional_test.go:2234: (dbg) Done: out/minikube-linux-amd64 start -p functional-698761 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2 : (1m6.917841949s)
--- PASS: TestFunctional/serial/StartWithProxy (66.92s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (42.1s)

=== RUN   TestFunctional/serial/SoftStart
I0920 18:04:27.739691   83346 config.go:182] Loaded profile config "functional-698761": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
functional_test.go:659: (dbg) Run:  out/minikube-linux-amd64 start -p functional-698761 --alsologtostderr -v=8
functional_test.go:659: (dbg) Done: out/minikube-linux-amd64 start -p functional-698761 --alsologtostderr -v=8: (42.097566935s)
functional_test.go:663: soft start took 42.098384306s for "functional-698761" cluster.
I0920 18:05:09.837641   83346 config.go:182] Loaded profile config "functional-698761": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
--- PASS: TestFunctional/serial/SoftStart (42.10s)

TestFunctional/serial/KubeContext (0.04s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:681: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

TestFunctional/serial/KubectlGetPods (0.08s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:696: (dbg) Run:  kubectl --context functional-698761 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.08s)

TestFunctional/serial/CacheCmd/cache/add_remote (2.5s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-698761 cache add registry.k8s.io/pause:3.1
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-698761 cache add registry.k8s.io/pause:3.3
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-698761 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (2.50s)

TestFunctional/serial/CacheCmd/cache/add_local (0.98s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1077: (dbg) Run:  docker build -t minikube-local-cache-test:functional-698761 /tmp/TestFunctionalserialCacheCmdcacheadd_local4006484536/001
functional_test.go:1089: (dbg) Run:  out/minikube-linux-amd64 -p functional-698761 cache add minikube-local-cache-test:functional-698761
functional_test.go:1094: (dbg) Run:  out/minikube-linux-amd64 -p functional-698761 cache delete minikube-local-cache-test:functional-698761
functional_test.go:1083: (dbg) Run:  docker rmi minikube-local-cache-test:functional-698761
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (0.98s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1102: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

TestFunctional/serial/CacheCmd/cache/list (0.04s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1110: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.04s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.21s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1124: (dbg) Run:  out/minikube-linux-amd64 -p functional-698761 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.21s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.17s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1147: (dbg) Run:  out/minikube-linux-amd64 -p functional-698761 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Run:  out/minikube-linux-amd64 -p functional-698761 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-698761 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (206.351055ms)
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:1158: (dbg) Run:  out/minikube-linux-amd64 -p functional-698761 cache reload
functional_test.go:1163: (dbg) Run:  out/minikube-linux-amd64 -p functional-698761 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.17s)

TestFunctional/serial/CacheCmd/cache/delete (0.09s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1172: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1172: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.09s)

TestFunctional/serial/MinikubeKubectlCmd (0.1s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:716: (dbg) Run:  out/minikube-linux-amd64 -p functional-698761 kubectl -- --context functional-698761 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.10s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.1s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:741: (dbg) Run:  out/kubectl --context functional-698761 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.10s)

TestFunctional/serial/ExtraConfig (43.56s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:757: (dbg) Run:  out/minikube-linux-amd64 start -p functional-698761 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:757: (dbg) Done: out/minikube-linux-amd64 start -p functional-698761 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (43.555262798s)
functional_test.go:761: restart took 43.555402835s for "functional-698761" cluster.
I0920 18:05:58.752700   83346 config.go:182] Loaded profile config "functional-698761": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
--- PASS: TestFunctional/serial/ExtraConfig (43.56s)

TestFunctional/serial/ComponentHealth (0.07s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:810: (dbg) Run:  kubectl --context functional-698761 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:825: etcd phase: Running
functional_test.go:835: etcd status: Ready
functional_test.go:825: kube-apiserver phase: Running
functional_test.go:835: kube-apiserver status: Ready
functional_test.go:825: kube-controller-manager phase: Running
functional_test.go:835: kube-controller-manager status: Ready
functional_test.go:825: kube-scheduler phase: Running
functional_test.go:835: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)

TestFunctional/serial/LogsCmd (0.99s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1236: (dbg) Run:  out/minikube-linux-amd64 -p functional-698761 logs
--- PASS: TestFunctional/serial/LogsCmd (0.99s)

TestFunctional/serial/LogsFileCmd (1s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1250: (dbg) Run:  out/minikube-linux-amd64 -p functional-698761 logs --file /tmp/TestFunctionalserialLogsFileCmd1972617760/001/logs.txt
--- PASS: TestFunctional/serial/LogsFileCmd (1.00s)

TestFunctional/serial/InvalidService (4.48s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2321: (dbg) Run:  kubectl --context functional-698761 apply -f testdata/invalidsvc.yaml
functional_test.go:2335: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-698761
functional_test.go:2335: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-698761: exit status 115 (273.323919ms)
-- stdout --
	|-----------|-------------|-------------|-----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |             URL             |
	|-----------|-------------|-------------|-----------------------------|
	| default   | invalid-svc |          80 | http://192.168.39.147:31268 |
	|-----------|-------------|-------------|-----------------------------|
	
	
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
functional_test.go:2327: (dbg) Run:  kubectl --context functional-698761 delete -f testdata/invalidsvc.yaml
functional_test.go:2327: (dbg) Done: kubectl --context functional-698761 delete -f testdata/invalidsvc.yaml: (1.007779535s)
--- PASS: TestFunctional/serial/InvalidService (4.48s)

TestFunctional/parallel/ConfigCmd (0.35s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-698761 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-698761 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-698761 config get cpus: exit status 14 (56.369836ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-698761 config set cpus 2
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-698761 config get cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-698761 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-698761 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-698761 config get cpus: exit status 14 (55.356081ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.35s)

TestFunctional/parallel/DashboardCmd (30.14s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:905: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-698761 --alsologtostderr -v=1]
functional_test.go:910: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-698761 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 93361: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (30.14s)

TestFunctional/parallel/DryRun (0.29s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:974: (dbg) Run:  out/minikube-linux-amd64 start -p functional-698761 --dry-run --memory 250MB --alsologtostderr --driver=kvm2 
functional_test.go:974: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-698761 --dry-run --memory 250MB --alsologtostderr --driver=kvm2 : exit status 23 (152.40294ms)
-- stdout --
	* [functional-698761] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19678
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19678-76160/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19678-76160/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	
	
-- /stdout --
** stderr ** 
	I0920 18:06:19.589376   93054 out.go:345] Setting OutFile to fd 1 ...
	I0920 18:06:19.589692   93054 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 18:06:19.589703   93054 out.go:358] Setting ErrFile to fd 2...
	I0920 18:06:19.589707   93054 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 18:06:19.589911   93054 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19678-76160/.minikube/bin
	I0920 18:06:19.590424   93054 out.go:352] Setting JSON to false
	I0920 18:06:19.591466   93054 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-12","uptime":6531,"bootTime":1726849049,"procs":259,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0920 18:06:19.591529   93054 start.go:139] virtualization: kvm guest
	I0920 18:06:19.593677   93054 out.go:177] * [functional-698761] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0920 18:06:19.595058   93054 out.go:177]   - MINIKUBE_LOCATION=19678
	I0920 18:06:19.595106   93054 notify.go:220] Checking for updates...
	I0920 18:06:19.597432   93054 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0920 18:06:19.598941   93054 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19678-76160/kubeconfig
	I0920 18:06:19.600187   93054 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19678-76160/.minikube
	I0920 18:06:19.601605   93054 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0920 18:06:19.602931   93054 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0920 18:06:19.604723   93054 config.go:182] Loaded profile config "functional-698761": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0920 18:06:19.605407   93054 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0920 18:06:19.605470   93054 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:06:19.620861   93054 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35457
	I0920 18:06:19.621309   93054 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:06:19.621891   93054 main.go:141] libmachine: Using API Version  1
	I0920 18:06:19.621914   93054 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:06:19.622321   93054 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:06:19.622511   93054 main.go:141] libmachine: (functional-698761) Calling .DriverName
	I0920 18:06:19.622778   93054 driver.go:394] Setting default libvirt URI to qemu:///system
	I0920 18:06:19.623109   93054 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0920 18:06:19.623154   93054 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:06:19.642308   93054 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44477
	I0920 18:06:19.642806   93054 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:06:19.643397   93054 main.go:141] libmachine: Using API Version  1
	I0920 18:06:19.643419   93054 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:06:19.643771   93054 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:06:19.643976   93054 main.go:141] libmachine: (functional-698761) Calling .DriverName
	I0920 18:06:19.687698   93054 out.go:177] * Using the kvm2 driver based on existing profile
	I0920 18:06:19.688998   93054 start.go:297] selected driver: kvm2
	I0920 18:06:19.689015   93054 start.go:901] validating driver "kvm2" against &{Name:functional-698761 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.31.1 ClusterName:functional-698761 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.147 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0
s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 18:06:19.689132   93054 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0920 18:06:19.690985   93054 out.go:201] 
	W0920 18:06:19.692495   93054 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0920 18:06:19.693721   93054 out.go:201] 
** /stderr **
functional_test.go:991: (dbg) Run:  out/minikube-linux-amd64 start -p functional-698761 --dry-run --alsologtostderr -v=1 --driver=kvm2 
--- PASS: TestFunctional/parallel/DryRun (0.29s)

TestFunctional/parallel/InternationalLanguage (0.15s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1020: (dbg) Run:  out/minikube-linux-amd64 start -p functional-698761 --dry-run --memory 250MB --alsologtostderr --driver=kvm2 
functional_test.go:1020: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-698761 --dry-run --memory 250MB --alsologtostderr --driver=kvm2 : exit status 23 (153.364224ms)
-- stdout --
	* [functional-698761] minikube v1.34.0 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19678
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19678-76160/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19678-76160/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	
-- /stdout --
** stderr ** 
	I0920 18:06:19.520775   93036 out.go:345] Setting OutFile to fd 1 ...
	I0920 18:06:19.520917   93036 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 18:06:19.520928   93036 out.go:358] Setting ErrFile to fd 2...
	I0920 18:06:19.520935   93036 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 18:06:19.521316   93036 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19678-76160/.minikube/bin
	I0920 18:06:19.522000   93036 out.go:352] Setting JSON to false
	I0920 18:06:19.523293   93036 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-12","uptime":6530,"bootTime":1726849049,"procs":256,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0920 18:06:19.523412   93036 start.go:139] virtualization: kvm guest
	I0920 18:06:19.525680   93036 out.go:177] * [functional-698761] minikube v1.34.0 sur Ubuntu 20.04 (kvm/amd64)
	I0920 18:06:19.527261   93036 out.go:177]   - MINIKUBE_LOCATION=19678
	I0920 18:06:19.527353   93036 notify.go:220] Checking for updates...
	I0920 18:06:19.529916   93036 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0920 18:06:19.531315   93036 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19678-76160/kubeconfig
	I0920 18:06:19.532639   93036 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19678-76160/.minikube
	I0920 18:06:19.533947   93036 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0920 18:06:19.535349   93036 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0920 18:06:19.537157   93036 config.go:182] Loaded profile config "functional-698761": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0920 18:06:19.537558   93036 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0920 18:06:19.537617   93036 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:06:19.558476   93036 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37635
	I0920 18:06:19.559186   93036 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:06:19.559806   93036 main.go:141] libmachine: Using API Version  1
	I0920 18:06:19.559833   93036 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:06:19.560197   93036 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:06:19.560460   93036 main.go:141] libmachine: (functional-698761) Calling .DriverName
	I0920 18:06:19.560736   93036 driver.go:394] Setting default libvirt URI to qemu:///system
	I0920 18:06:19.561224   93036 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0920 18:06:19.561281   93036 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:06:19.579447   93036 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36953
	I0920 18:06:19.579857   93036 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:06:19.580394   93036 main.go:141] libmachine: Using API Version  1
	I0920 18:06:19.580419   93036 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:06:19.580785   93036 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:06:19.580991   93036 main.go:141] libmachine: (functional-698761) Calling .DriverName
	I0920 18:06:19.616846   93036 out.go:177] * Utilisation du pilote kvm2 basé sur le profil existant
	I0920 18:06:19.617855   93036 start.go:297] selected driver: kvm2
	I0920 18:06:19.617872   93036 start.go:901] validating driver "kvm2" against &{Name:functional-698761 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-698761 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.147 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 18:06:19.617981   93036 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0920 18:06:19.620210   93036 out.go:201] 
	W0920 18:06:19.621363   93036 out.go:270] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0920 18:06:19.622583   93036 out.go:201] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.15s)

TestFunctional/parallel/StatusCmd (1.01s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:854: (dbg) Run:  out/minikube-linux-amd64 -p functional-698761 status
functional_test.go:860: (dbg) Run:  out/minikube-linux-amd64 -p functional-698761 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:872: (dbg) Run:  out/minikube-linux-amd64 -p functional-698761 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.01s)

TestFunctional/parallel/ServiceCmdConnect (12.55s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1629: (dbg) Run:  kubectl --context functional-698761 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1635: (dbg) Run:  kubectl --context functional-698761 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-67bdd5bbb4-lh99w" [5ba6618e-677e-4d15-a612-955ad0df7c21] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-67bdd5bbb4-lh99w" [5ba6618e-677e-4d15-a612-955ad0df7c21] Running
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 12.00430651s
functional_test.go:1649: (dbg) Run:  out/minikube-linux-amd64 -p functional-698761 service hello-node-connect --url
functional_test.go:1655: found endpoint for hello-node-connect: http://192.168.39.147:31638
functional_test.go:1675: http://192.168.39.147:31638: success! body:

Hostname: hello-node-connect-67bdd5bbb4-lh99w

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.39.147:8080/

Request Headers:
	accept-encoding=gzip
	host=192.168.39.147:31638
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-

--- PASS: TestFunctional/parallel/ServiceCmdConnect (12.55s)

TestFunctional/parallel/AddonsCmd (0.13s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1690: (dbg) Run:  out/minikube-linux-amd64 -p functional-698761 addons list
functional_test.go:1702: (dbg) Run:  out/minikube-linux-amd64 -p functional-698761 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.13s)

TestFunctional/parallel/PersistentVolumeClaim (47.56s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [65100433-29dd-4487-ba72-cc920dcc052f] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.005050827s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-698761 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-698761 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-698761 get pvc myclaim -o=json
I0920 18:06:11.984292   83346 retry.go:31] will retry after 1.564605337s: testpvc phase = "Pending", want "Bound" (msg={TypeMeta:{Kind:PersistentVolumeClaim APIVersion:v1} ObjectMeta:{Name:myclaim GenerateName: Namespace:default SelfLink: UID:57a108ed-8e04-4ab0-904a-6b6993cc1f4f ResourceVersion:723 Generation:0 CreationTimestamp:2024-09-20 18:06:11 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[] Annotations:map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"myclaim","namespace":"default"},"spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"500Mi"}},"volumeMode":"Filesystem"}}
volume.beta.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath volume.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath] OwnerReferences:[] Finalizers:[kubernetes.io/pvc-protection] ManagedFields:[]} Spec:{AccessModes:[ReadWriteOnce] Selector:nil Resources:{Limits:map[] Requests:map[storage:{i:{value:524288000 scale:0} d:{Dec:<nil>} s:500Mi Format:BinarySI}]} VolumeName: StorageClassName:0xc001a62990 VolumeMode:0xc001a629a0 DataSource:nil DataSourceRef:nil VolumeAttributesClassName:<nil>} Status:{Phase:Pending AccessModes:[] Capacity:map[] Conditions:[] AllocatedResources:map[] AllocatedResourceStatuses:map[] CurrentVolumeAttributesClassName:<nil> ModifyVolumeStatus:nil}})
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-698761 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-698761 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [b3025d7d-d404-4ab7-b06e-d30162f69188] Pending
helpers_test.go:344: "sp-pod" [b3025d7d-d404-4ab7-b06e-d30162f69188] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [b3025d7d-d404-4ab7-b06e-d30162f69188] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 15.004911264s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-698761 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-698761 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-698761 delete -f testdata/storage-provisioner/pod.yaml: (2.070640496s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-698761 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [b75d0e89-37c8-4709-9a5f-a1dbf306ff40] Pending
helpers_test.go:344: "sp-pod" [b75d0e89-37c8-4709-9a5f-a1dbf306ff40] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [b75d0e89-37c8-4709-9a5f-a1dbf306ff40] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 22.004089963s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-698761 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (47.56s)

TestFunctional/parallel/SSHCmd (0.43s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1725: (dbg) Run:  out/minikube-linux-amd64 -p functional-698761 ssh "echo hello"
functional_test.go:1742: (dbg) Run:  out/minikube-linux-amd64 -p functional-698761 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.43s)

TestFunctional/parallel/CpCmd (1.3s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-698761 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-698761 ssh -n functional-698761 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-698761 cp functional-698761:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd3588479368/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-698761 ssh -n functional-698761 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-698761 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-698761 ssh -n functional-698761 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.30s)

TestFunctional/parallel/MySQL (31.67s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1793: (dbg) Run:  kubectl --context functional-698761 replace --force -f testdata/mysql.yaml
functional_test.go:1799: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-6cdb49bbb-bplkt" [449f00f8-5328-4555-acd1-01e3c6236fda] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-6cdb49bbb-bplkt" [449f00f8-5328-4555-acd1-01e3c6236fda] Running
functional_test.go:1799: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 25.005174103s
functional_test.go:1807: (dbg) Run:  kubectl --context functional-698761 exec mysql-6cdb49bbb-bplkt -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context functional-698761 exec mysql-6cdb49bbb-bplkt -- mysql -ppassword -e "show databases;": exit status 1 (423.810029ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **
I0920 18:06:45.497746   83346 retry.go:31] will retry after 1.307681315s: exit status 1
functional_test.go:1807: (dbg) Run:  kubectl --context functional-698761 exec mysql-6cdb49bbb-bplkt -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context functional-698761 exec mysql-6cdb49bbb-bplkt -- mysql -ppassword -e "show databases;": exit status 1 (169.269007ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
I0920 18:06:46.975652   83346 retry.go:31] will retry after 1.941345727s: exit status 1
functional_test.go:1807: (dbg) Run:  kubectl --context functional-698761 exec mysql-6cdb49bbb-bplkt -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context functional-698761 exec mysql-6cdb49bbb-bplkt -- mysql -ppassword -e "show databases;": exit status 1 (161.158227ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
I0920 18:06:49.078774   83346 retry.go:31] will retry after 2.291117383s: exit status 1
2024/09/20 18:06:49 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:1807: (dbg) Run:  kubectl --context functional-698761 exec mysql-6cdb49bbb-bplkt -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (31.67s)

TestFunctional/parallel/FileSync (0.25s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1929: Checking for existence of /etc/test/nested/copy/83346/hosts within VM
functional_test.go:1931: (dbg) Run:  out/minikube-linux-amd64 -p functional-698761 ssh "sudo cat /etc/test/nested/copy/83346/hosts"
functional_test.go:1936: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.25s)

TestFunctional/parallel/CertSync (1.32s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1972: Checking for existence of /etc/ssl/certs/83346.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-698761 ssh "sudo cat /etc/ssl/certs/83346.pem"
functional_test.go:1972: Checking for existence of /usr/share/ca-certificates/83346.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-698761 ssh "sudo cat /usr/share/ca-certificates/83346.pem"
functional_test.go:1972: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-698761 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/833462.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-698761 ssh "sudo cat /etc/ssl/certs/833462.pem"
functional_test.go:1999: Checking for existence of /usr/share/ca-certificates/833462.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-698761 ssh "sudo cat /usr/share/ca-certificates/833462.pem"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-698761 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.32s)

TestFunctional/parallel/NodeLabels (0.06s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:219: (dbg) Run:  kubectl --context functional-698761 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.06s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.24s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2027: (dbg) Run:  out/minikube-linux-amd64 -p functional-698761 ssh "sudo systemctl is-active crio"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-698761 ssh "sudo systemctl is-active crio": exit status 1 (236.409738ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.24s)

TestFunctional/parallel/License (0.22s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2288: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.22s)

TestFunctional/parallel/ServiceCmd/DeployApp (11.17s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1439: (dbg) Run:  kubectl --context functional-698761 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1445: (dbg) Run:  kubectl --context functional-698761 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-6b9f76b5c7-tthm6" [fac57859-2274-46b7-b20a-8b16b6dc7e8d] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-6b9f76b5c7-tthm6" [fac57859-2274-46b7-b20a-8b16b6dc7e8d] Running
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 11.004889871s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (11.17s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.34s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1270: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1275: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.34s)

TestFunctional/parallel/ProfileCmd/profile_list (0.31s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1310: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1315: Took "261.201856ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1324: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1329: Took "44.341123ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.31s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.32s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1361: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1366: Took "274.110406ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1374: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1379: Took "44.315452ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.32s)

TestFunctional/parallel/MountCmd/any-port (8.73s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-698761 /tmp/TestFunctionalparallelMountCmdany-port2142622259/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1726855567580016681" to /tmp/TestFunctionalparallelMountCmdany-port2142622259/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1726855567580016681" to /tmp/TestFunctionalparallelMountCmdany-port2142622259/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1726855567580016681" to /tmp/TestFunctionalparallelMountCmdany-port2142622259/001/test-1726855567580016681
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-698761 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-698761 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (205.27311ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I0920 18:06:07.785608   83346 retry.go:31] will retry after 714.462938ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-698761 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-698761 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Sep 20 18:06 created-by-test
-rw-r--r-- 1 docker docker 24 Sep 20 18:06 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Sep 20 18:06 test-1726855567580016681
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-698761 ssh cat /mount-9p/test-1726855567580016681
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-698761 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [eeb2a117-310f-4088-84ad-28b05e727b56] Pending
helpers_test.go:344: "busybox-mount" [eeb2a117-310f-4088-84ad-28b05e727b56] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [eeb2a117-310f-4088-84ad-28b05e727b56] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [eeb2a117-310f-4088-84ad-28b05e727b56] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 6.003625163s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-698761 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-698761 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-698761 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-698761 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-698761 /tmp/TestFunctionalparallelMountCmdany-port2142622259/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (8.73s)

TestFunctional/parallel/MountCmd/specific-port (1.67s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-698761 /tmp/TestFunctionalparallelMountCmdspecific-port2473447206/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-698761 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-698761 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (217.333347ms)

** stderr ** 
	ssh: Process exited with status 1
** /stderr **
I0920 18:06:16.525736   83346 retry.go:31] will retry after 448.977047ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-698761 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-698761 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-698761 /tmp/TestFunctionalparallelMountCmdspecific-port2473447206/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-698761 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-698761 ssh "sudo umount -f /mount-9p": exit status 1 (212.539732ms)

-- stdout --
	umount: /mount-9p: not mounted.
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-698761 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-698761 /tmp/TestFunctionalparallelMountCmdspecific-port2473447206/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.67s)

TestFunctional/parallel/ServiceCmd/List (0.3s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1459: (dbg) Run:  out/minikube-linux-amd64 -p functional-698761 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.30s)

TestFunctional/parallel/MountCmd/VerifyCleanup (1.5s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-698761 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3443108997/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-698761 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3443108997/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-698761 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3443108997/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-698761 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-698761 ssh "findmnt -T" /mount1: exit status 1 (315.42073ms)

** stderr ** 
	ssh: Process exited with status 1
** /stderr **
I0920 18:06:18.296180   83346 retry.go:31] will retry after 382.071563ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-698761 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-698761 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-698761 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-698761 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-698761 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3443108997/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-698761 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3443108997/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-698761 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3443108997/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.50s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.44s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1489: (dbg) Run:  out/minikube-linux-amd64 -p functional-698761 service list -o json
functional_test.go:1494: Took "435.448441ms" to run "out/minikube-linux-amd64 -p functional-698761 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.44s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.33s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1509: (dbg) Run:  out/minikube-linux-amd64 -p functional-698761 service --namespace=default --https --url hello-node
functional_test.go:1522: found endpoint: https://192.168.39.147:32167
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.33s)

TestFunctional/parallel/ServiceCmd/Format (0.34s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1540: (dbg) Run:  out/minikube-linux-amd64 -p functional-698761 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.34s)

TestFunctional/parallel/ServiceCmd/URL (0.38s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1559: (dbg) Run:  out/minikube-linux-amd64 -p functional-698761 service hello-node --url
functional_test.go:1565: found endpoint for hello-node: http://192.168.39.147:32167
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.38s)

TestFunctional/parallel/DockerEnv/bash (1.16s)

=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:499: (dbg) Run:  /bin/bash -c "eval $(out/minikube-linux-amd64 -p functional-698761 docker-env) && out/minikube-linux-amd64 status -p functional-698761"
functional_test.go:522: (dbg) Run:  /bin/bash -c "eval $(out/minikube-linux-amd64 -p functional-698761 docker-env) && docker images"
--- PASS: TestFunctional/parallel/DockerEnv/bash (1.16s)

TestFunctional/parallel/Version/short (0.05s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2256: (dbg) Run:  out/minikube-linux-amd64 -p functional-698761 version --short
--- PASS: TestFunctional/parallel/Version/short (0.05s)

TestFunctional/parallel/Version/components (0.56s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2270: (dbg) Run:  out/minikube-linux-amd64 -p functional-698761 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.56s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.09s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-698761 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.09s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.1s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-698761 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.10s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.09s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-698761 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.09s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.22s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-698761 image ls --format short --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-698761 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.31.1
registry.k8s.io/kube-proxy:v1.31.1
registry.k8s.io/kube-controller-manager:v1.31.1
registry.k8s.io/kube-apiserver:v1.31.1
registry.k8s.io/etcd:3.5.15-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.11.3
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/minikube-local-cache-test:functional-698761
docker.io/kicbase/echo-server:functional-698761
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-698761 image ls --format short --alsologtostderr:
I0920 18:06:28.720023   93841 out.go:345] Setting OutFile to fd 1 ...
I0920 18:06:28.720116   93841 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0920 18:06:28.720121   93841 out.go:358] Setting ErrFile to fd 2...
I0920 18:06:28.720125   93841 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0920 18:06:28.720327   93841 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19678-76160/.minikube/bin
I0920 18:06:28.721130   93841 config.go:182] Loaded profile config "functional-698761": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0920 18:06:28.721268   93841 config.go:182] Loaded profile config "functional-698761": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0920 18:06:28.721713   93841 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0920 18:06:28.721755   93841 main.go:141] libmachine: Launching plugin server for driver kvm2
I0920 18:06:28.738064   93841 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46191
I0920 18:06:28.738589   93841 main.go:141] libmachine: () Calling .GetVersion
I0920 18:06:28.739182   93841 main.go:141] libmachine: Using API Version  1
I0920 18:06:28.739225   93841 main.go:141] libmachine: () Calling .SetConfigRaw
I0920 18:06:28.739604   93841 main.go:141] libmachine: () Calling .GetMachineName
I0920 18:06:28.739837   93841 main.go:141] libmachine: (functional-698761) Calling .GetState
I0920 18:06:28.741909   93841 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0920 18:06:28.741961   93841 main.go:141] libmachine: Launching plugin server for driver kvm2
I0920 18:06:28.756852   93841 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32837
I0920 18:06:28.757364   93841 main.go:141] libmachine: () Calling .GetVersion
I0920 18:06:28.757929   93841 main.go:141] libmachine: Using API Version  1
I0920 18:06:28.757949   93841 main.go:141] libmachine: () Calling .SetConfigRaw
I0920 18:06:28.758282   93841 main.go:141] libmachine: () Calling .GetMachineName
I0920 18:06:28.758644   93841 main.go:141] libmachine: (functional-698761) Calling .DriverName
I0920 18:06:28.758917   93841 ssh_runner.go:195] Run: systemctl --version
I0920 18:06:28.758966   93841 main.go:141] libmachine: (functional-698761) Calling .GetSSHHostname
I0920 18:06:28.762037   93841 main.go:141] libmachine: (functional-698761) DBG | domain functional-698761 has defined MAC address 52:54:00:40:4a:d5 in network mk-functional-698761
I0920 18:06:28.762467   93841 main.go:141] libmachine: (functional-698761) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:4a:d5", ip: ""} in network mk-functional-698761: {Iface:virbr1 ExpiryTime:2024-09-20 19:03:35 +0000 UTC Type:0 Mac:52:54:00:40:4a:d5 Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:functional-698761 Clientid:01:52:54:00:40:4a:d5}
I0920 18:06:28.762574   93841 main.go:141] libmachine: (functional-698761) DBG | domain functional-698761 has defined IP address 192.168.39.147 and MAC address 52:54:00:40:4a:d5 in network mk-functional-698761
I0920 18:06:28.762627   93841 main.go:141] libmachine: (functional-698761) Calling .GetSSHPort
I0920 18:06:28.762809   93841 main.go:141] libmachine: (functional-698761) Calling .GetSSHKeyPath
I0920 18:06:28.762993   93841 main.go:141] libmachine: (functional-698761) Calling .GetSSHUsername
I0920 18:06:28.763163   93841 sshutil.go:53] new ssh client: &{IP:192.168.39.147 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19678-76160/.minikube/machines/functional-698761/id_rsa Username:docker}
I0920 18:06:28.861767   93841 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
I0920 18:06:28.890739   93841 main.go:141] libmachine: Making call to close driver server
I0920 18:06:28.890768   93841 main.go:141] libmachine: (functional-698761) Calling .Close
I0920 18:06:28.891037   93841 main.go:141] libmachine: Successfully made call to close driver server
I0920 18:06:28.891056   93841 main.go:141] libmachine: Making call to close connection to plugin binary
I0920 18:06:28.891079   93841 main.go:141] libmachine: Making call to close driver server
I0920 18:06:28.891083   93841 main.go:141] libmachine: (functional-698761) DBG | Closing plugin on server side
I0920 18:06:28.891089   93841 main.go:141] libmachine: (functional-698761) Calling .Close
I0920 18:06:28.891409   93841 main.go:141] libmachine: (functional-698761) DBG | Closing plugin on server side
I0920 18:06:28.891428   93841 main.go:141] libmachine: Successfully made call to close driver server
I0920 18:06:28.891445   93841 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.22s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.2s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-698761 image ls --format table --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-698761 image ls --format table --alsologtostderr:
|---------------------------------------------|-------------------|---------------|--------|
|                    Image                    |        Tag        |   Image ID    |  Size  |
|---------------------------------------------|-------------------|---------------|--------|
| registry.k8s.io/echoserver                  | 1.8               | 82e4c8a736a4f | 95.4MB |
| registry.k8s.io/kube-apiserver              | v1.31.1           | 6bab7719df100 | 94.2MB |
| registry.k8s.io/coredns/coredns             | v1.11.3           | c69fa2e9cbf5f | 61.8MB |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc      | 56cc512116c8f | 4.4MB  |
| registry.k8s.io/pause                       | latest            | 350b164e7ae1d | 240kB  |
| registry.k8s.io/kube-scheduler              | v1.31.1           | 9aa1fad941575 | 67.4MB |
| registry.k8s.io/kube-proxy                  | v1.31.1           | 60c005f310ff3 | 91.5MB |
| docker.io/kicbase/echo-server               | functional-698761 | 9056ab77afb8e | 4.94MB |
| gcr.io/k8s-minikube/storage-provisioner     | v5                | 6e38f40d628db | 31.5MB |
| registry.k8s.io/pause                       | 3.3               | 0184c1613d929 | 683kB  |
| registry.k8s.io/pause                       | 3.1               | da86e6ba6ca19 | 742kB  |
| registry.k8s.io/kube-controller-manager     | v1.31.1           | 175ffd71cce3d | 88.4MB |
| docker.io/library/minikube-local-cache-test | functional-698761 | 0e80d32bf51f7 | 30B    |
| docker.io/library/nginx                     | latest            | 39286ab8a5e14 | 188MB  |
| registry.k8s.io/etcd                        | 3.5.15-0          | 2e96e5913fc06 | 148MB  |
| registry.k8s.io/pause                       | 3.10              | 873ed75102791 | 736kB  |
| localhost/my-image                          | functional-698761 | e714644bf7473 | 1.24MB |
|---------------------------------------------|-------------------|---------------|--------|
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-698761 image ls --format table --alsologtostderr:
I0920 18:06:32.150621   94034 out.go:345] Setting OutFile to fd 1 ...
I0920 18:06:32.150722   94034 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0920 18:06:32.150729   94034 out.go:358] Setting ErrFile to fd 2...
I0920 18:06:32.150734   94034 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0920 18:06:32.150964   94034 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19678-76160/.minikube/bin
I0920 18:06:32.151603   94034 config.go:182] Loaded profile config "functional-698761": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0920 18:06:32.151736   94034 config.go:182] Loaded profile config "functional-698761": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0920 18:06:32.152300   94034 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0920 18:06:32.152355   94034 main.go:141] libmachine: Launching plugin server for driver kvm2
I0920 18:06:32.166822   94034 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38581
I0920 18:06:32.167271   94034 main.go:141] libmachine: () Calling .GetVersion
I0920 18:06:32.167815   94034 main.go:141] libmachine: Using API Version  1
I0920 18:06:32.167838   94034 main.go:141] libmachine: () Calling .SetConfigRaw
I0920 18:06:32.168258   94034 main.go:141] libmachine: () Calling .GetMachineName
I0920 18:06:32.168468   94034 main.go:141] libmachine: (functional-698761) Calling .GetState
I0920 18:06:32.170343   94034 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0920 18:06:32.170383   94034 main.go:141] libmachine: Launching plugin server for driver kvm2
I0920 18:06:32.184310   94034 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43035
I0920 18:06:32.184745   94034 main.go:141] libmachine: () Calling .GetVersion
I0920 18:06:32.185247   94034 main.go:141] libmachine: Using API Version  1
I0920 18:06:32.185271   94034 main.go:141] libmachine: () Calling .SetConfigRaw
I0920 18:06:32.185709   94034 main.go:141] libmachine: () Calling .GetMachineName
I0920 18:06:32.185923   94034 main.go:141] libmachine: (functional-698761) Calling .DriverName
I0920 18:06:32.186108   94034 ssh_runner.go:195] Run: systemctl --version
I0920 18:06:32.186146   94034 main.go:141] libmachine: (functional-698761) Calling .GetSSHHostname
I0920 18:06:32.188826   94034 main.go:141] libmachine: (functional-698761) DBG | domain functional-698761 has defined MAC address 52:54:00:40:4a:d5 in network mk-functional-698761
I0920 18:06:32.189248   94034 main.go:141] libmachine: (functional-698761) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:4a:d5", ip: ""} in network mk-functional-698761: {Iface:virbr1 ExpiryTime:2024-09-20 19:03:35 +0000 UTC Type:0 Mac:52:54:00:40:4a:d5 Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:functional-698761 Clientid:01:52:54:00:40:4a:d5}
I0920 18:06:32.189283   94034 main.go:141] libmachine: (functional-698761) DBG | domain functional-698761 has defined IP address 192.168.39.147 and MAC address 52:54:00:40:4a:d5 in network mk-functional-698761
I0920 18:06:32.189403   94034 main.go:141] libmachine: (functional-698761) Calling .GetSSHPort
I0920 18:06:32.189560   94034 main.go:141] libmachine: (functional-698761) Calling .GetSSHKeyPath
I0920 18:06:32.189809   94034 main.go:141] libmachine: (functional-698761) Calling .GetSSHUsername
I0920 18:06:32.190014   94034 sshutil.go:53] new ssh client: &{IP:192.168.39.147 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19678-76160/.minikube/machines/functional-698761/id_rsa Username:docker}
I0920 18:06:32.271964   94034 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
I0920 18:06:32.298902   94034 main.go:141] libmachine: Making call to close driver server
I0920 18:06:32.298918   94034 main.go:141] libmachine: (functional-698761) Calling .Close
I0920 18:06:32.299227   94034 main.go:141] libmachine: Successfully made call to close driver server
I0920 18:06:32.299255   94034 main.go:141] libmachine: Making call to close connection to plugin binary
I0920 18:06:32.299264   94034 main.go:141] libmachine: Making call to close driver server
I0920 18:06:32.299272   94034 main.go:141] libmachine: (functional-698761) Calling .Close
I0920 18:06:32.299554   94034 main.go:141] libmachine: (functional-698761) DBG | Closing plugin on server side
I0920 18:06:32.299570   94034 main.go:141] libmachine: Successfully made call to close driver server
I0920 18:06:32.299583   94034 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.20s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.22s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-698761 image ls --format json --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-698761 image ls --format json --alsologtostderr:
[{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":[],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"95400000"},{"id":"9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.31.1"],"size":"67400000"},{"id":"6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.31.1"],"size":"94200000"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4400000"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"240000"},{"id":"0e80d32bf51f7cc765a92c4d6420fffdb275339317772b83533c04fd1c85aa7e","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-698761"],"size":"30"},{"id":"175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.31.1"],"size":"88400000"},{"id":"c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.3"],"size":"61800000"},{"id":"2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.5.15-0"],"size":"148000000"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31500000"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"742000"},{"id":"e714644bf74739516831c77df0476e99c14882107d09abe4bbe5e42678c3c1c4","repoDigests":[],"repoTags":["localhost/my-image:functional-698761"],"size":"1240000"},{"id":"39286ab8a5e14aeaf5fdd6e2fac76e0c8d31a0c07224f0ee5e6be502f12e93f3","repoDigests":[],"repoTags":["docker.io/library/nginx:latest"],"size":"188000000"},{"id":"873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.10"],"size":"736000"},{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":[],"repoTags":["docker.io/kicbase/echo-server:functional-698761"],"size":"4940000"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"683000"},{"id":"60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.31.1"],"size":"91500000"}]
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-698761 image ls --format json --alsologtostderr:
I0920 18:06:31.926191   94011 out.go:345] Setting OutFile to fd 1 ...
I0920 18:06:31.926357   94011 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0920 18:06:31.926369   94011 out.go:358] Setting ErrFile to fd 2...
I0920 18:06:31.926376   94011 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0920 18:06:31.926652   94011 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19678-76160/.minikube/bin
I0920 18:06:31.927497   94011 config.go:182] Loaded profile config "functional-698761": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0920 18:06:31.927661   94011 config.go:182] Loaded profile config "functional-698761": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0920 18:06:31.928199   94011 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0920 18:06:31.928257   94011 main.go:141] libmachine: Launching plugin server for driver kvm2
I0920 18:06:31.943185   94011 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34559
I0920 18:06:31.943698   94011 main.go:141] libmachine: () Calling .GetVersion
I0920 18:06:31.944226   94011 main.go:141] libmachine: Using API Version  1
I0920 18:06:31.944249   94011 main.go:141] libmachine: () Calling .SetConfigRaw
I0920 18:06:31.944652   94011 main.go:141] libmachine: () Calling .GetMachineName
I0920 18:06:31.944968   94011 main.go:141] libmachine: (functional-698761) Calling .GetState
I0920 18:06:31.946660   94011 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0920 18:06:31.946714   94011 main.go:141] libmachine: Launching plugin server for driver kvm2
I0920 18:06:31.961067   94011 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44873
I0920 18:06:31.961601   94011 main.go:141] libmachine: () Calling .GetVersion
I0920 18:06:31.962084   94011 main.go:141] libmachine: Using API Version  1
I0920 18:06:31.962106   94011 main.go:141] libmachine: () Calling .SetConfigRaw
I0920 18:06:31.962480   94011 main.go:141] libmachine: () Calling .GetMachineName
I0920 18:06:31.962677   94011 main.go:141] libmachine: (functional-698761) Calling .DriverName
I0920 18:06:31.962891   94011 ssh_runner.go:195] Run: systemctl --version
I0920 18:06:31.962921   94011 main.go:141] libmachine: (functional-698761) Calling .GetSSHHostname
I0920 18:06:31.965821   94011 main.go:141] libmachine: (functional-698761) DBG | domain functional-698761 has defined MAC address 52:54:00:40:4a:d5 in network mk-functional-698761
I0920 18:06:31.966233   94011 main.go:141] libmachine: (functional-698761) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:4a:d5", ip: ""} in network mk-functional-698761: {Iface:virbr1 ExpiryTime:2024-09-20 19:03:35 +0000 UTC Type:0 Mac:52:54:00:40:4a:d5 Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:functional-698761 Clientid:01:52:54:00:40:4a:d5}
I0920 18:06:31.966267   94011 main.go:141] libmachine: (functional-698761) DBG | domain functional-698761 has defined IP address 192.168.39.147 and MAC address 52:54:00:40:4a:d5 in network mk-functional-698761
I0920 18:06:31.966458   94011 main.go:141] libmachine: (functional-698761) Calling .GetSSHPort
I0920 18:06:31.966646   94011 main.go:141] libmachine: (functional-698761) Calling .GetSSHKeyPath
I0920 18:06:31.966854   94011 main.go:141] libmachine: (functional-698761) Calling .GetSSHUsername
I0920 18:06:31.967013   94011 sshutil.go:53] new ssh client: &{IP:192.168.39.147 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19678-76160/.minikube/machines/functional-698761/id_rsa Username:docker}
I0920 18:06:32.053254   94011 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
I0920 18:06:32.096097   94011 main.go:141] libmachine: Making call to close driver server
I0920 18:06:32.096118   94011 main.go:141] libmachine: (functional-698761) Calling .Close
I0920 18:06:32.096451   94011 main.go:141] libmachine: Successfully made call to close driver server
I0920 18:06:32.096505   94011 main.go:141] libmachine: Making call to close connection to plugin binary
I0920 18:06:32.096510   94011 main.go:141] libmachine: (functional-698761) DBG | Closing plugin on server side
I0920 18:06:32.096551   94011 main.go:141] libmachine: Making call to close driver server
I0920 18:06:32.096564   94011 main.go:141] libmachine: (functional-698761) Calling .Close
I0920 18:06:32.096797   94011 main.go:141] libmachine: (functional-698761) DBG | Closing plugin on server side
I0920 18:06:32.096837   94011 main.go:141] libmachine: Successfully made call to close driver server
I0920 18:06:32.096848   94011 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.22s)

TestFunctional/parallel/ImageCommands/ImageListYaml (0.23s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-698761 image ls --format yaml --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-698761 image ls --format yaml --alsologtostderr:
- id: 9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.31.1
size: "67400000"
- id: 2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.5.15-0
size: "148000000"
- id: 873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.10
size: "736000"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31500000"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "683000"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4400000"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests: []
repoTags:
- registry.k8s.io/echoserver:1.8
size: "95400000"
- id: 0e80d32bf51f7cc765a92c4d6420fffdb275339317772b83533c04fd1c85aa7e
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-698761
size: "30"
- id: 175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.31.1
size: "88400000"
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests: []
repoTags:
- docker.io/kicbase/echo-server:functional-698761
size: "4940000"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "240000"
- id: 6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.31.1
size: "94200000"
- id: 60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.31.1
size: "91500000"
- id: 39286ab8a5e14aeaf5fdd6e2fac76e0c8d31a0c07224f0ee5e6be502f12e93f3
repoDigests: []
repoTags:
- docker.io/library/nginx:latest
size: "188000000"
- id: c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.3
size: "61800000"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "742000"

functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-698761 image ls --format yaml --alsologtostderr:
I0920 18:06:28.947745   93875 out.go:345] Setting OutFile to fd 1 ...
I0920 18:06:28.947882   93875 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0920 18:06:28.947892   93875 out.go:358] Setting ErrFile to fd 2...
I0920 18:06:28.947898   93875 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0920 18:06:28.948165   93875 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19678-76160/.minikube/bin
I0920 18:06:28.949237   93875 config.go:182] Loaded profile config "functional-698761": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0920 18:06:28.949464   93875 config.go:182] Loaded profile config "functional-698761": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0920 18:06:28.950524   93875 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0920 18:06:28.950586   93875 main.go:141] libmachine: Launching plugin server for driver kvm2
I0920 18:06:28.966933   93875 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45999
I0920 18:06:28.967398   93875 main.go:141] libmachine: () Calling .GetVersion
I0920 18:06:28.968018   93875 main.go:141] libmachine: Using API Version  1
I0920 18:06:28.968044   93875 main.go:141] libmachine: () Calling .SetConfigRaw
I0920 18:06:28.968389   93875 main.go:141] libmachine: () Calling .GetMachineName
I0920 18:06:28.968561   93875 main.go:141] libmachine: (functional-698761) Calling .GetState
I0920 18:06:28.970403   93875 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0920 18:06:28.970461   93875 main.go:141] libmachine: Launching plugin server for driver kvm2
I0920 18:06:28.987232   93875 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42365
I0920 18:06:28.987765   93875 main.go:141] libmachine: () Calling .GetVersion
I0920 18:06:28.988358   93875 main.go:141] libmachine: Using API Version  1
I0920 18:06:28.988383   93875 main.go:141] libmachine: () Calling .SetConfigRaw
I0920 18:06:28.988894   93875 main.go:141] libmachine: () Calling .GetMachineName
I0920 18:06:28.989122   93875 main.go:141] libmachine: (functional-698761) Calling .DriverName
I0920 18:06:28.989339   93875 ssh_runner.go:195] Run: systemctl --version
I0920 18:06:28.989371   93875 main.go:141] libmachine: (functional-698761) Calling .GetSSHHostname
I0920 18:06:28.992462   93875 main.go:141] libmachine: (functional-698761) DBG | domain functional-698761 has defined MAC address 52:54:00:40:4a:d5 in network mk-functional-698761
I0920 18:06:28.992925   93875 main.go:141] libmachine: (functional-698761) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:4a:d5", ip: ""} in network mk-functional-698761: {Iface:virbr1 ExpiryTime:2024-09-20 19:03:35 +0000 UTC Type:0 Mac:52:54:00:40:4a:d5 Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:functional-698761 Clientid:01:52:54:00:40:4a:d5}
I0920 18:06:28.992950   93875 main.go:141] libmachine: (functional-698761) DBG | domain functional-698761 has defined IP address 192.168.39.147 and MAC address 52:54:00:40:4a:d5 in network mk-functional-698761
I0920 18:06:28.993110   93875 main.go:141] libmachine: (functional-698761) Calling .GetSSHPort
I0920 18:06:28.993294   93875 main.go:141] libmachine: (functional-698761) Calling .GetSSHKeyPath
I0920 18:06:28.993461   93875 main.go:141] libmachine: (functional-698761) Calling .GetSSHUsername
I0920 18:06:28.993632   93875 sshutil.go:53] new ssh client: &{IP:192.168.39.147 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19678-76160/.minikube/machines/functional-698761/id_rsa Username:docker}
I0920 18:06:29.088187   93875 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
I0920 18:06:29.119032   93875 main.go:141] libmachine: Making call to close driver server
I0920 18:06:29.119049   93875 main.go:141] libmachine: (functional-698761) Calling .Close
I0920 18:06:29.119329   93875 main.go:141] libmachine: Successfully made call to close driver server
I0920 18:06:29.119357   93875 main.go:141] libmachine: Making call to close connection to plugin binary
I0920 18:06:29.119361   93875 main.go:141] libmachine: (functional-698761) DBG | Closing plugin on server side
I0920 18:06:29.119366   93875 main.go:141] libmachine: Making call to close driver server
I0920 18:06:29.119406   93875 main.go:141] libmachine: (functional-698761) Calling .Close
I0920 18:06:29.119642   93875 main.go:141] libmachine: Successfully made call to close driver server
I0920 18:06:29.119655   93875 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.23s)

TestFunctional/parallel/ImageCommands/ImageBuild (2.76s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:308: (dbg) Run:  out/minikube-linux-amd64 -p functional-698761 ssh pgrep buildkitd
functional_test.go:308: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-698761 ssh pgrep buildkitd: exit status 1 (249.628032ms)

** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:315: (dbg) Run:  out/minikube-linux-amd64 -p functional-698761 image build -t localhost/my-image:functional-698761 testdata/build --alsologtostderr
functional_test.go:315: (dbg) Done: out/minikube-linux-amd64 -p functional-698761 image build -t localhost/my-image:functional-698761 testdata/build --alsologtostderr: (2.294150707s)
functional_test.go:323: (dbg) Stderr: out/minikube-linux-amd64 -p functional-698761 image build -t localhost/my-image:functional-698761 testdata/build --alsologtostderr:
I0920 18:06:29.423774   93937 out.go:345] Setting OutFile to fd 1 ...
I0920 18:06:29.424019   93937 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0920 18:06:29.424029   93937 out.go:358] Setting ErrFile to fd 2...
I0920 18:06:29.424034   93937 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0920 18:06:29.424198   93937 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19678-76160/.minikube/bin
I0920 18:06:29.424764   93937 config.go:182] Loaded profile config "functional-698761": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0920 18:06:29.425288   93937 config.go:182] Loaded profile config "functional-698761": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0920 18:06:29.425699   93937 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0920 18:06:29.425743   93937 main.go:141] libmachine: Launching plugin server for driver kvm2
I0920 18:06:29.440743   93937 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40225
I0920 18:06:29.441269   93937 main.go:141] libmachine: () Calling .GetVersion
I0920 18:06:29.441859   93937 main.go:141] libmachine: Using API Version  1
I0920 18:06:29.441887   93937 main.go:141] libmachine: () Calling .SetConfigRaw
I0920 18:06:29.442355   93937 main.go:141] libmachine: () Calling .GetMachineName
I0920 18:06:29.442599   93937 main.go:141] libmachine: (functional-698761) Calling .GetState
I0920 18:06:29.444715   93937 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0920 18:06:29.444763   93937 main.go:141] libmachine: Launching plugin server for driver kvm2
I0920 18:06:29.459681   93937 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41247
I0920 18:06:29.460129   93937 main.go:141] libmachine: () Calling .GetVersion
I0920 18:06:29.460593   93937 main.go:141] libmachine: Using API Version  1
I0920 18:06:29.460615   93937 main.go:141] libmachine: () Calling .SetConfigRaw
I0920 18:06:29.461027   93937 main.go:141] libmachine: () Calling .GetMachineName
I0920 18:06:29.461255   93937 main.go:141] libmachine: (functional-698761) Calling .DriverName
I0920 18:06:29.461454   93937 ssh_runner.go:195] Run: systemctl --version
I0920 18:06:29.461494   93937 main.go:141] libmachine: (functional-698761) Calling .GetSSHHostname
I0920 18:06:29.464050   93937 main.go:141] libmachine: (functional-698761) DBG | domain functional-698761 has defined MAC address 52:54:00:40:4a:d5 in network mk-functional-698761
I0920 18:06:29.464373   93937 main.go:141] libmachine: (functional-698761) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:4a:d5", ip: ""} in network mk-functional-698761: {Iface:virbr1 ExpiryTime:2024-09-20 19:03:35 +0000 UTC Type:0 Mac:52:54:00:40:4a:d5 Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:functional-698761 Clientid:01:52:54:00:40:4a:d5}
I0920 18:06:29.464396   93937 main.go:141] libmachine: (functional-698761) DBG | domain functional-698761 has defined IP address 192.168.39.147 and MAC address 52:54:00:40:4a:d5 in network mk-functional-698761
I0920 18:06:29.464570   93937 main.go:141] libmachine: (functional-698761) Calling .GetSSHPort
I0920 18:06:29.464739   93937 main.go:141] libmachine: (functional-698761) Calling .GetSSHKeyPath
I0920 18:06:29.464903   93937 main.go:141] libmachine: (functional-698761) Calling .GetSSHUsername
I0920 18:06:29.465075   93937 sshutil.go:53] new ssh client: &{IP:192.168.39.147 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19678-76160/.minikube/machines/functional-698761/id_rsa Username:docker}
I0920 18:06:29.557484   93937 build_images.go:161] Building image from path: /tmp/build.1897042160.tar
I0920 18:06:29.557568   93937 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0920 18:06:29.582043   93937 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.1897042160.tar
I0920 18:06:29.594778   93937 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.1897042160.tar: stat -c "%s %y" /var/lib/minikube/build/build.1897042160.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.1897042160.tar': No such file or directory
I0920 18:06:29.594805   93937 ssh_runner.go:362] scp /tmp/build.1897042160.tar --> /var/lib/minikube/build/build.1897042160.tar (3072 bytes)
I0920 18:06:29.628282   93937 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.1897042160
I0920 18:06:29.640278   93937 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.1897042160 -xf /var/lib/minikube/build/build.1897042160.tar
I0920 18:06:29.654655   93937 docker.go:360] Building image: /var/lib/minikube/build/build.1897042160
I0920 18:06:29.654715   93937 ssh_runner.go:195] Run: docker build -t localhost/my-image:functional-698761 /var/lib/minikube/build/build.1897042160
#0 building with "default" instance using docker driver

#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.0s

#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 0.5s

#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s

#4 [internal] load build context
#4 transferring context: 62B 0.0s done
#4 DONE 0.1s

#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.1s done
#5 sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 770B / 770B done
#5 sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee 527B / 527B done
#5 sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a 1.46kB / 1.46kB done
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0B / 772.79kB 0.1s
#5 extracting sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 772.79kB / 772.79kB 0.1s done
#5 extracting sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0.1s done
#5 DONE 0.3s

#6 [2/3] RUN true
#6 DONE 0.4s

#7 [3/3] ADD content.txt /
#7 DONE 0.1s

#8 exporting to image
#8 exporting layers 0.0s done
#8 writing image sha256:e714644bf74739516831c77df0476e99c14882107d09abe4bbe5e42678c3c1c4 done
#8 naming to localhost/my-image:functional-698761 0.0s done
#8 DONE 0.1s
I0920 18:06:31.636105   93937 ssh_runner.go:235] Completed: docker build -t localhost/my-image:functional-698761 /var/lib/minikube/build/build.1897042160: (1.9813598s)
I0920 18:06:31.636191   93937 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.1897042160
I0920 18:06:31.649386   93937 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.1897042160.tar
I0920 18:06:31.663706   93937 build_images.go:217] Built localhost/my-image:functional-698761 from /tmp/build.1897042160.tar
I0920 18:06:31.663747   93937 build_images.go:133] succeeded building to: functional-698761
I0920 18:06:31.663754   93937 build_images.go:134] failed building to: 
I0920 18:06:31.663784   93937 main.go:141] libmachine: Making call to close driver server
I0920 18:06:31.663802   93937 main.go:141] libmachine: (functional-698761) Calling .Close
I0920 18:06:31.664114   93937 main.go:141] libmachine: Successfully made call to close driver server
I0920 18:06:31.664129   93937 main.go:141] libmachine: (functional-698761) DBG | Closing plugin on server side
I0920 18:06:31.664134   93937 main.go:141] libmachine: Making call to close connection to plugin binary
I0920 18:06:31.664145   93937 main.go:141] libmachine: Making call to close driver server
I0920 18:06:31.664152   93937 main.go:141] libmachine: (functional-698761) Calling .Close
I0920 18:06:31.664388   93937 main.go:141] libmachine: Successfully made call to close driver server
I0920 18:06:31.664399   93937 main.go:141] libmachine: Making call to close connection to plugin binary
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-698761 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (2.76s)

TestFunctional/parallel/ImageCommands/Setup (0.96s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:342: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:347: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-698761
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.96s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.2s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:355: (dbg) Run:  out/minikube-linux-amd64 -p functional-698761 image load --daemon kicbase/echo-server:functional-698761 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-698761 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.20s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.79s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p functional-698761 image load --daemon kicbase/echo-server:functional-698761 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-698761 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.79s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.16s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:235: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:240: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-698761
functional_test.go:245: (dbg) Run:  out/minikube-linux-amd64 -p functional-698761 image load --daemon kicbase/echo-server:functional-698761 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-698761 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.16s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.44s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-698761 image save kicbase/echo-server:functional-698761 /home/jenkins/workspace/KVM_Linux_integration/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.44s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.8s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:392: (dbg) Run:  out/minikube-linux-amd64 -p functional-698761 image rm kicbase/echo-server:functional-698761 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-698761 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.80s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.71s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:409: (dbg) Run:  out/minikube-linux-amd64 -p functional-698761 image load /home/jenkins/workspace/KVM_Linux_integration/echo-server-save.tar --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-698761 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.71s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.39s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:419: (dbg) Run:  docker rmi kicbase/echo-server:functional-698761
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-698761 image save --daemon kicbase/echo-server:functional-698761 --alsologtostderr
functional_test.go:432: (dbg) Run:  docker image inspect kicbase/echo-server:functional-698761
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.39s)

TestFunctional/delete_echo-server_images (0.04s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-698761
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

TestFunctional/delete_my-image_image (0.01s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:198: (dbg) Run:  docker rmi -f localhost/my-image:functional-698761
--- PASS: TestFunctional/delete_my-image_image (0.01s)

TestFunctional/delete_minikube_cached_images (0.01s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:206: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-698761
--- PASS: TestFunctional/delete_minikube_cached_images (0.01s)

TestGvisorAddon (257.81s)

=== RUN   TestGvisorAddon
=== PAUSE TestGvisorAddon
=== CONT  TestGvisorAddon
gvisor_addon_test.go:52: (dbg) Run:  out/minikube-linux-amd64 start -p gvisor-806572 --memory=2200 --container-runtime=containerd --docker-opt containerd=/var/run/containerd/containerd.sock --driver=kvm2 
E0920 18:51:05.718072   83346 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-76160/.minikube/profiles/functional-698761/client.crt: no such file or directory" logger="UnhandledError"
gvisor_addon_test.go:52: (dbg) Done: out/minikube-linux-amd64 start -p gvisor-806572 --memory=2200 --container-runtime=containerd --docker-opt containerd=/var/run/containerd/containerd.sock --driver=kvm2 : (1m44.740490753s)
gvisor_addon_test.go:58: (dbg) Run:  out/minikube-linux-amd64 -p gvisor-806572 cache add gcr.io/k8s-minikube/gvisor-addon:2
gvisor_addon_test.go:58: (dbg) Done: out/minikube-linux-amd64 -p gvisor-806572 cache add gcr.io/k8s-minikube/gvisor-addon:2: (19.93447555s)
gvisor_addon_test.go:63: (dbg) Run:  out/minikube-linux-amd64 -p gvisor-806572 addons enable gvisor
E0920 18:52:39.578901   83346 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-76160/.minikube/profiles/skaffold-806341/client.crt: no such file or directory" logger="UnhandledError"
gvisor_addon_test.go:63: (dbg) Done: out/minikube-linux-amd64 -p gvisor-806572 addons enable gvisor: (3.16946255s)
gvisor_addon_test.go:68: (dbg) TestGvisorAddon: waiting 4m0s for pods matching "kubernetes.io/minikube-addons=gvisor" in namespace "kube-system" ...
helpers_test.go:344: "gvisor" [83a662b8-8001-4f32-97d9-a01ab62cca53] Running
gvisor_addon_test.go:68: (dbg) TestGvisorAddon: kubernetes.io/minikube-addons=gvisor healthy within 6.004941866s
gvisor_addon_test.go:73: (dbg) Run:  kubectl --context gvisor-806572 replace --force -f testdata/nginx-gvisor.yaml
gvisor_addon_test.go:78: (dbg) TestGvisorAddon: waiting 4m0s for pods matching "run=nginx,runtime=gvisor" in namespace "default" ...
helpers_test.go:344: "nginx-gvisor" [cc7ff9dd-cd71-4655-a8d1-8102fba17d46] Pending
helpers_test.go:344: "nginx-gvisor" [cc7ff9dd-cd71-4655-a8d1-8102fba17d46] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-gvisor" [cc7ff9dd-cd71-4655-a8d1-8102fba17d46] Running
gvisor_addon_test.go:78: (dbg) TestGvisorAddon: run=nginx,runtime=gvisor healthy within 26.006410132s
gvisor_addon_test.go:83: (dbg) Run:  out/minikube-linux-amd64 stop -p gvisor-806572
gvisor_addon_test.go:83: (dbg) Done: out/minikube-linux-amd64 stop -p gvisor-806572: (7.873799604s)
gvisor_addon_test.go:88: (dbg) Run:  out/minikube-linux-amd64 start -p gvisor-806572 --memory=2200 --container-runtime=containerd --docker-opt containerd=/var/run/containerd/containerd.sock --driver=kvm2 
gvisor_addon_test.go:88: (dbg) Done: out/minikube-linux-amd64 start -p gvisor-806572 --memory=2200 --container-runtime=containerd --docker-opt containerd=/var/run/containerd/containerd.sock --driver=kvm2 : (1m17.819260349s)
gvisor_addon_test.go:92: (dbg) TestGvisorAddon: waiting 4m0s for pods matching "kubernetes.io/minikube-addons=gvisor" in namespace "kube-system" ...
helpers_test.go:344: "gvisor" [83a662b8-8001-4f32-97d9-a01ab62cca53] Running / Ready:ContainersNotReady (containers with unready status: [gvisor]) / ContainersReady:ContainersNotReady (containers with unready status: [gvisor])
E0920 18:54:42.462318   83346 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-76160/.minikube/profiles/skaffold-806341/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "gvisor" [83a662b8-8001-4f32-97d9-a01ab62cca53] Running
gvisor_addon_test.go:92: (dbg) TestGvisorAddon: kubernetes.io/minikube-addons=gvisor healthy within 6.003914192s
gvisor_addon_test.go:95: (dbg) TestGvisorAddon: waiting 4m0s for pods matching "run=nginx,runtime=gvisor" in namespace "default" ...
helpers_test.go:344: "nginx-gvisor" [cc7ff9dd-cd71-4655-a8d1-8102fba17d46] Running / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
gvisor_addon_test.go:95: (dbg) TestGvisorAddon: run=nginx,runtime=gvisor healthy within 5.004406792s
helpers_test.go:175: Cleaning up "gvisor-806572" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p gvisor-806572
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p gvisor-806572: (1.086585669s)
--- PASS: TestGvisorAddon (257.81s)

TestMultiControlPlane/serial/StartCluster (216.71s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p ha-587695 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2 
E0920 18:06:54.849038   83346 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-76160/.minikube/profiles/addons-545460/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:06:56.131286   83346 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-76160/.minikube/profiles/addons-545460/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:06:58.693513   83346 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-76160/.minikube/profiles/addons-545460/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:07:03.815564   83346 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-76160/.minikube/profiles/addons-545460/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:07:14.057141   83346 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-76160/.minikube/profiles/addons-545460/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:07:34.539276   83346 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-76160/.minikube/profiles/addons-545460/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:08:15.500917   83346 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-76160/.minikube/profiles/addons-545460/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:09:37.422745   83346 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-76160/.minikube/profiles/addons-545460/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 start -p ha-587695 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2 : (3m36.026013072s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-587695 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (216.71s)

TestMultiControlPlane/serial/DeployApp (5.82s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-587695 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-587695 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 kubectl -p ha-587695 -- rollout status deployment/busybox: (3.469960953s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-587695 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-587695 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-587695 -- exec busybox-7dff88458-lqsvj -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-587695 -- exec busybox-7dff88458-m5k7f -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-587695 -- exec busybox-7dff88458-w7nvh -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-587695 -- exec busybox-7dff88458-lqsvj -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-587695 -- exec busybox-7dff88458-m5k7f -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-587695 -- exec busybox-7dff88458-w7nvh -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-587695 -- exec busybox-7dff88458-lqsvj -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-587695 -- exec busybox-7dff88458-m5k7f -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-587695 -- exec busybox-7dff88458-w7nvh -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (5.82s)

TestMultiControlPlane/serial/PingHostFromPods (1.26s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-587695 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-587695 -- exec busybox-7dff88458-lqsvj -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-587695 -- exec busybox-7dff88458-lqsvj -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-587695 -- exec busybox-7dff88458-m5k7f -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-587695 -- exec busybox-7dff88458-m5k7f -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-587695 -- exec busybox-7dff88458-w7nvh -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-587695 -- exec busybox-7dff88458-w7nvh -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.26s)

TestMultiControlPlane/serial/AddWorkerNode (63.94s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-587695 -v=7 --alsologtostderr
E0920 18:11:05.716624   83346 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-76160/.minikube/profiles/functional-698761/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:11:05.723014   83346 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-76160/.minikube/profiles/functional-698761/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:11:05.734411   83346 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-76160/.minikube/profiles/functional-698761/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:11:05.755779   83346 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-76160/.minikube/profiles/functional-698761/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:11:05.797510   83346 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-76160/.minikube/profiles/functional-698761/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:11:05.879137   83346 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-76160/.minikube/profiles/functional-698761/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:11:06.040823   83346 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-76160/.minikube/profiles/functional-698761/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:11:06.362421   83346 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-76160/.minikube/profiles/functional-698761/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:11:07.004374   83346 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-76160/.minikube/profiles/functional-698761/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:11:08.285987   83346 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-76160/.minikube/profiles/functional-698761/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:11:10.848228   83346 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-76160/.minikube/profiles/functional-698761/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:11:15.970323   83346 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-76160/.minikube/profiles/functional-698761/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:11:26.212071   83346 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-76160/.minikube/profiles/functional-698761/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 node add -p ha-587695 -v=7 --alsologtostderr: (1m3.094173788s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-587695 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (63.94s)

TestMultiControlPlane/serial/NodeLabels (0.07s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-587695 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.07s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (0.85s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.85s)

TestMultiControlPlane/serial/CopyFile (12.70s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-linux-amd64 -p ha-587695 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-587695 cp testdata/cp-test.txt ha-587695:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-587695 ssh -n ha-587695 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-587695 cp ha-587695:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile218036332/001/cp-test_ha-587695.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-587695 ssh -n ha-587695 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-587695 cp ha-587695:/home/docker/cp-test.txt ha-587695-m02:/home/docker/cp-test_ha-587695_ha-587695-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-587695 ssh -n ha-587695 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-587695 ssh -n ha-587695-m02 "sudo cat /home/docker/cp-test_ha-587695_ha-587695-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-587695 cp ha-587695:/home/docker/cp-test.txt ha-587695-m03:/home/docker/cp-test_ha-587695_ha-587695-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-587695 ssh -n ha-587695 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-587695 ssh -n ha-587695-m03 "sudo cat /home/docker/cp-test_ha-587695_ha-587695-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-587695 cp ha-587695:/home/docker/cp-test.txt ha-587695-m04:/home/docker/cp-test_ha-587695_ha-587695-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-587695 ssh -n ha-587695 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-587695 ssh -n ha-587695-m04 "sudo cat /home/docker/cp-test_ha-587695_ha-587695-m04.txt"
E0920 18:11:46.693685   83346 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-76160/.minikube/profiles/functional-698761/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-587695 cp testdata/cp-test.txt ha-587695-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-587695 ssh -n ha-587695-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-587695 cp ha-587695-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile218036332/001/cp-test_ha-587695-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-587695 ssh -n ha-587695-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-587695 cp ha-587695-m02:/home/docker/cp-test.txt ha-587695:/home/docker/cp-test_ha-587695-m02_ha-587695.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-587695 ssh -n ha-587695-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-587695 ssh -n ha-587695 "sudo cat /home/docker/cp-test_ha-587695-m02_ha-587695.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-587695 cp ha-587695-m02:/home/docker/cp-test.txt ha-587695-m03:/home/docker/cp-test_ha-587695-m02_ha-587695-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-587695 ssh -n ha-587695-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-587695 ssh -n ha-587695-m03 "sudo cat /home/docker/cp-test_ha-587695-m02_ha-587695-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-587695 cp ha-587695-m02:/home/docker/cp-test.txt ha-587695-m04:/home/docker/cp-test_ha-587695-m02_ha-587695-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-587695 ssh -n ha-587695-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-587695 ssh -n ha-587695-m04 "sudo cat /home/docker/cp-test_ha-587695-m02_ha-587695-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-587695 cp testdata/cp-test.txt ha-587695-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-587695 ssh -n ha-587695-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-587695 cp ha-587695-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile218036332/001/cp-test_ha-587695-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-587695 ssh -n ha-587695-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-587695 cp ha-587695-m03:/home/docker/cp-test.txt ha-587695:/home/docker/cp-test_ha-587695-m03_ha-587695.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-587695 ssh -n ha-587695-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-587695 ssh -n ha-587695 "sudo cat /home/docker/cp-test_ha-587695-m03_ha-587695.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-587695 cp ha-587695-m03:/home/docker/cp-test.txt ha-587695-m02:/home/docker/cp-test_ha-587695-m03_ha-587695-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-587695 ssh -n ha-587695-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-587695 ssh -n ha-587695-m02 "sudo cat /home/docker/cp-test_ha-587695-m03_ha-587695-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-587695 cp ha-587695-m03:/home/docker/cp-test.txt ha-587695-m04:/home/docker/cp-test_ha-587695-m03_ha-587695-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-587695 ssh -n ha-587695-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-587695 ssh -n ha-587695-m04 "sudo cat /home/docker/cp-test_ha-587695-m03_ha-587695-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-587695 cp testdata/cp-test.txt ha-587695-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-587695 ssh -n ha-587695-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-587695 cp ha-587695-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile218036332/001/cp-test_ha-587695-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-587695 ssh -n ha-587695-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-587695 cp ha-587695-m04:/home/docker/cp-test.txt ha-587695:/home/docker/cp-test_ha-587695-m04_ha-587695.txt
E0920 18:11:53.561413   83346 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-76160/.minikube/profiles/addons-545460/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-587695 ssh -n ha-587695-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-587695 ssh -n ha-587695 "sudo cat /home/docker/cp-test_ha-587695-m04_ha-587695.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-587695 cp ha-587695-m04:/home/docker/cp-test.txt ha-587695-m02:/home/docker/cp-test_ha-587695-m04_ha-587695-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-587695 ssh -n ha-587695-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-587695 ssh -n ha-587695-m02 "sudo cat /home/docker/cp-test_ha-587695-m04_ha-587695-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-587695 cp ha-587695-m04:/home/docker/cp-test.txt ha-587695-m03:/home/docker/cp-test_ha-587695-m04_ha-587695-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-587695 ssh -n ha-587695-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-587695 ssh -n ha-587695-m03 "sudo cat /home/docker/cp-test_ha-587695-m04_ha-587695-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (12.70s)

TestMultiControlPlane/serial/StopSecondaryNode (13.94s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-linux-amd64 -p ha-587695 node stop m02 -v=7 --alsologtostderr
ha_test.go:363: (dbg) Done: out/minikube-linux-amd64 -p ha-587695 node stop m02 -v=7 --alsologtostderr: (13.308249128s)
ha_test.go:369: (dbg) Run:  out/minikube-linux-amd64 -p ha-587695 status -v=7 --alsologtostderr
ha_test.go:369: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-587695 status -v=7 --alsologtostderr: exit status 7 (627.286111ms)

-- stdout --
	ha-587695
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-587695-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-587695-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-587695-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I0920 18:12:09.007522   98536 out.go:345] Setting OutFile to fd 1 ...
	I0920 18:12:09.007782   98536 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 18:12:09.007792   98536 out.go:358] Setting ErrFile to fd 2...
	I0920 18:12:09.007796   98536 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 18:12:09.007969   98536 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19678-76160/.minikube/bin
	I0920 18:12:09.008153   98536 out.go:352] Setting JSON to false
	I0920 18:12:09.008187   98536 mustload.go:65] Loading cluster: ha-587695
	I0920 18:12:09.008314   98536 notify.go:220] Checking for updates...
	I0920 18:12:09.008758   98536 config.go:182] Loaded profile config "ha-587695": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0920 18:12:09.008792   98536 status.go:174] checking status of ha-587695 ...
	I0920 18:12:09.009245   98536 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0920 18:12:09.009313   98536 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:12:09.024412   98536 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37837
	I0920 18:12:09.024884   98536 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:12:09.025533   98536 main.go:141] libmachine: Using API Version  1
	I0920 18:12:09.025563   98536 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:12:09.025887   98536 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:12:09.026040   98536 main.go:141] libmachine: (ha-587695) Calling .GetState
	I0920 18:12:09.027622   98536 status.go:364] ha-587695 host status = "Running" (err=<nil>)
	I0920 18:12:09.027639   98536 host.go:66] Checking if "ha-587695" exists ...
	I0920 18:12:09.027929   98536 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0920 18:12:09.027983   98536 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:12:09.042359   98536 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39041
	I0920 18:12:09.042732   98536 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:12:09.043180   98536 main.go:141] libmachine: Using API Version  1
	I0920 18:12:09.043199   98536 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:12:09.043565   98536 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:12:09.043761   98536 main.go:141] libmachine: (ha-587695) Calling .GetIP
	I0920 18:12:09.046476   98536 main.go:141] libmachine: (ha-587695) DBG | domain ha-587695 has defined MAC address 52:54:00:20:8d:a1 in network mk-ha-587695
	I0920 18:12:09.046982   98536 main.go:141] libmachine: (ha-587695) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:8d:a1", ip: ""} in network mk-ha-587695: {Iface:virbr1 ExpiryTime:2024-09-20 19:07:08 +0000 UTC Type:0 Mac:52:54:00:20:8d:a1 Iaid: IPaddr:192.168.39.96 Prefix:24 Hostname:ha-587695 Clientid:01:52:54:00:20:8d:a1}
	I0920 18:12:09.047008   98536 main.go:141] libmachine: (ha-587695) DBG | domain ha-587695 has defined IP address 192.168.39.96 and MAC address 52:54:00:20:8d:a1 in network mk-ha-587695
	I0920 18:12:09.047238   98536 host.go:66] Checking if "ha-587695" exists ...
	I0920 18:12:09.047600   98536 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0920 18:12:09.047646   98536 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:12:09.062271   98536 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40959
	I0920 18:12:09.062628   98536 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:12:09.063075   98536 main.go:141] libmachine: Using API Version  1
	I0920 18:12:09.063101   98536 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:12:09.063403   98536 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:12:09.063563   98536 main.go:141] libmachine: (ha-587695) Calling .DriverName
	I0920 18:12:09.063715   98536 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0920 18:12:09.063739   98536 main.go:141] libmachine: (ha-587695) Calling .GetSSHHostname
	I0920 18:12:09.066610   98536 main.go:141] libmachine: (ha-587695) DBG | domain ha-587695 has defined MAC address 52:54:00:20:8d:a1 in network mk-ha-587695
	I0920 18:12:09.067018   98536 main.go:141] libmachine: (ha-587695) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:8d:a1", ip: ""} in network mk-ha-587695: {Iface:virbr1 ExpiryTime:2024-09-20 19:07:08 +0000 UTC Type:0 Mac:52:54:00:20:8d:a1 Iaid: IPaddr:192.168.39.96 Prefix:24 Hostname:ha-587695 Clientid:01:52:54:00:20:8d:a1}
	I0920 18:12:09.067052   98536 main.go:141] libmachine: (ha-587695) DBG | domain ha-587695 has defined IP address 192.168.39.96 and MAC address 52:54:00:20:8d:a1 in network mk-ha-587695
	I0920 18:12:09.067290   98536 main.go:141] libmachine: (ha-587695) Calling .GetSSHPort
	I0920 18:12:09.067457   98536 main.go:141] libmachine: (ha-587695) Calling .GetSSHKeyPath
	I0920 18:12:09.067628   98536 main.go:141] libmachine: (ha-587695) Calling .GetSSHUsername
	I0920 18:12:09.067896   98536 sshutil.go:53] new ssh client: &{IP:192.168.39.96 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19678-76160/.minikube/machines/ha-587695/id_rsa Username:docker}
	I0920 18:12:09.149280   98536 ssh_runner.go:195] Run: systemctl --version
	I0920 18:12:09.155275   98536 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0920 18:12:09.169600   98536 kubeconfig.go:125] found "ha-587695" server: "https://192.168.39.254:8443"
	I0920 18:12:09.169638   98536 api_server.go:166] Checking apiserver status ...
	I0920 18:12:09.169684   98536 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:12:09.183877   98536 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1863/cgroup
	W0920 18:12:09.199287   98536 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1863/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0920 18:12:09.199350   98536 ssh_runner.go:195] Run: ls
	I0920 18:12:09.204642   98536 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0920 18:12:09.210601   98536 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0920 18:12:09.210633   98536 status.go:456] ha-587695 apiserver status = Running (err=<nil>)
	I0920 18:12:09.210648   98536 status.go:176] ha-587695 status: &{Name:ha-587695 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0920 18:12:09.210676   98536 status.go:174] checking status of ha-587695-m02 ...
	I0920 18:12:09.211011   98536 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0920 18:12:09.211058   98536 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:12:09.225798   98536 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46221
	I0920 18:12:09.226157   98536 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:12:09.226637   98536 main.go:141] libmachine: Using API Version  1
	I0920 18:12:09.226675   98536 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:12:09.227118   98536 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:12:09.227321   98536 main.go:141] libmachine: (ha-587695-m02) Calling .GetState
	I0920 18:12:09.228974   98536 status.go:364] ha-587695-m02 host status = "Stopped" (err=<nil>)
	I0920 18:12:09.228998   98536 status.go:377] host is not running, skipping remaining checks
	I0920 18:12:09.229006   98536 status.go:176] ha-587695-m02 status: &{Name:ha-587695-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0920 18:12:09.229031   98536 status.go:174] checking status of ha-587695-m03 ...
	I0920 18:12:09.229336   98536 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0920 18:12:09.229409   98536 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:12:09.243714   98536 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40853
	I0920 18:12:09.244136   98536 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:12:09.244621   98536 main.go:141] libmachine: Using API Version  1
	I0920 18:12:09.244643   98536 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:12:09.244948   98536 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:12:09.245131   98536 main.go:141] libmachine: (ha-587695-m03) Calling .GetState
	I0920 18:12:09.246616   98536 status.go:364] ha-587695-m03 host status = "Running" (err=<nil>)
	I0920 18:12:09.246635   98536 host.go:66] Checking if "ha-587695-m03" exists ...
	I0920 18:12:09.246922   98536 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0920 18:12:09.246957   98536 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:12:09.261344   98536 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42415
	I0920 18:12:09.261949   98536 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:12:09.262418   98536 main.go:141] libmachine: Using API Version  1
	I0920 18:12:09.262454   98536 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:12:09.262801   98536 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:12:09.263162   98536 main.go:141] libmachine: (ha-587695-m03) Calling .GetIP
	I0920 18:12:09.266069   98536 main.go:141] libmachine: (ha-587695-m03) DBG | domain ha-587695-m03 has defined MAC address 52:54:00:86:14:25 in network mk-ha-587695
	I0920 18:12:09.266469   98536 main.go:141] libmachine: (ha-587695-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:14:25", ip: ""} in network mk-ha-587695: {Iface:virbr1 ExpiryTime:2024-09-20 19:09:22 +0000 UTC Type:0 Mac:52:54:00:86:14:25 Iaid: IPaddr:192.168.39.115 Prefix:24 Hostname:ha-587695-m03 Clientid:01:52:54:00:86:14:25}
	I0920 18:12:09.266494   98536 main.go:141] libmachine: (ha-587695-m03) DBG | domain ha-587695-m03 has defined IP address 192.168.39.115 and MAC address 52:54:00:86:14:25 in network mk-ha-587695
	I0920 18:12:09.266643   98536 host.go:66] Checking if "ha-587695-m03" exists ...
	I0920 18:12:09.266983   98536 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0920 18:12:09.267018   98536 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:12:09.281786   98536 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35019
	I0920 18:12:09.282269   98536 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:12:09.282769   98536 main.go:141] libmachine: Using API Version  1
	I0920 18:12:09.282793   98536 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:12:09.283152   98536 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:12:09.283367   98536 main.go:141] libmachine: (ha-587695-m03) Calling .DriverName
	I0920 18:12:09.283530   98536 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0920 18:12:09.283556   98536 main.go:141] libmachine: (ha-587695-m03) Calling .GetSSHHostname
	I0920 18:12:09.286028   98536 main.go:141] libmachine: (ha-587695-m03) DBG | domain ha-587695-m03 has defined MAC address 52:54:00:86:14:25 in network mk-ha-587695
	I0920 18:12:09.286442   98536 main.go:141] libmachine: (ha-587695-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:14:25", ip: ""} in network mk-ha-587695: {Iface:virbr1 ExpiryTime:2024-09-20 19:09:22 +0000 UTC Type:0 Mac:52:54:00:86:14:25 Iaid: IPaddr:192.168.39.115 Prefix:24 Hostname:ha-587695-m03 Clientid:01:52:54:00:86:14:25}
	I0920 18:12:09.286469   98536 main.go:141] libmachine: (ha-587695-m03) DBG | domain ha-587695-m03 has defined IP address 192.168.39.115 and MAC address 52:54:00:86:14:25 in network mk-ha-587695
	I0920 18:12:09.286616   98536 main.go:141] libmachine: (ha-587695-m03) Calling .GetSSHPort
	I0920 18:12:09.286792   98536 main.go:141] libmachine: (ha-587695-m03) Calling .GetSSHKeyPath
	I0920 18:12:09.286995   98536 main.go:141] libmachine: (ha-587695-m03) Calling .GetSSHUsername
	I0920 18:12:09.287181   98536 sshutil.go:53] new ssh client: &{IP:192.168.39.115 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19678-76160/.minikube/machines/ha-587695-m03/id_rsa Username:docker}
	I0920 18:12:09.365050   98536 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0920 18:12:09.385692   98536 kubeconfig.go:125] found "ha-587695" server: "https://192.168.39.254:8443"
	I0920 18:12:09.385728   98536 api_server.go:166] Checking apiserver status ...
	I0920 18:12:09.385780   98536 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:12:09.401955   98536 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1734/cgroup
	W0920 18:12:09.412578   98536 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1734/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0920 18:12:09.412635   98536 ssh_runner.go:195] Run: ls
	I0920 18:12:09.419457   98536 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0920 18:12:09.427989   98536 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0920 18:12:09.428012   98536 status.go:456] ha-587695-m03 apiserver status = Running (err=<nil>)
	I0920 18:12:09.428021   98536 status.go:176] ha-587695-m03 status: &{Name:ha-587695-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0920 18:12:09.428036   98536 status.go:174] checking status of ha-587695-m04 ...
	I0920 18:12:09.428373   98536 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0920 18:12:09.428411   98536 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:12:09.443514   98536 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42201
	I0920 18:12:09.443988   98536 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:12:09.444430   98536 main.go:141] libmachine: Using API Version  1
	I0920 18:12:09.444455   98536 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:12:09.444771   98536 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:12:09.444967   98536 main.go:141] libmachine: (ha-587695-m04) Calling .GetState
	I0920 18:12:09.446473   98536 status.go:364] ha-587695-m04 host status = "Running" (err=<nil>)
	I0920 18:12:09.446492   98536 host.go:66] Checking if "ha-587695-m04" exists ...
	I0920 18:12:09.446758   98536 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0920 18:12:09.446791   98536 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:12:09.461773   98536 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38201
	I0920 18:12:09.462144   98536 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:12:09.462597   98536 main.go:141] libmachine: Using API Version  1
	I0920 18:12:09.462615   98536 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:12:09.462898   98536 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:12:09.463122   98536 main.go:141] libmachine: (ha-587695-m04) Calling .GetIP
	I0920 18:12:09.465665   98536 main.go:141] libmachine: (ha-587695-m04) DBG | domain ha-587695-m04 has defined MAC address 52:54:00:55:ce:4f in network mk-ha-587695
	I0920 18:12:09.466070   98536 main.go:141] libmachine: (ha-587695-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:ce:4f", ip: ""} in network mk-ha-587695: {Iface:virbr1 ExpiryTime:2024-09-20 19:10:53 +0000 UTC Type:0 Mac:52:54:00:55:ce:4f Iaid: IPaddr:192.168.39.46 Prefix:24 Hostname:ha-587695-m04 Clientid:01:52:54:00:55:ce:4f}
	I0920 18:12:09.466095   98536 main.go:141] libmachine: (ha-587695-m04) DBG | domain ha-587695-m04 has defined IP address 192.168.39.46 and MAC address 52:54:00:55:ce:4f in network mk-ha-587695
	I0920 18:12:09.466220   98536 host.go:66] Checking if "ha-587695-m04" exists ...
	I0920 18:12:09.466511   98536 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0920 18:12:09.466555   98536 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:12:09.480528   98536 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35257
	I0920 18:12:09.480872   98536 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:12:09.481311   98536 main.go:141] libmachine: Using API Version  1
	I0920 18:12:09.481333   98536 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:12:09.481654   98536 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:12:09.481869   98536 main.go:141] libmachine: (ha-587695-m04) Calling .DriverName
	I0920 18:12:09.482051   98536 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0920 18:12:09.482071   98536 main.go:141] libmachine: (ha-587695-m04) Calling .GetSSHHostname
	I0920 18:12:09.485173   98536 main.go:141] libmachine: (ha-587695-m04) DBG | domain ha-587695-m04 has defined MAC address 52:54:00:55:ce:4f in network mk-ha-587695
	I0920 18:12:09.485607   98536 main.go:141] libmachine: (ha-587695-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:ce:4f", ip: ""} in network mk-ha-587695: {Iface:virbr1 ExpiryTime:2024-09-20 19:10:53 +0000 UTC Type:0 Mac:52:54:00:55:ce:4f Iaid: IPaddr:192.168.39.46 Prefix:24 Hostname:ha-587695-m04 Clientid:01:52:54:00:55:ce:4f}
	I0920 18:12:09.485640   98536 main.go:141] libmachine: (ha-587695-m04) DBG | domain ha-587695-m04 has defined IP address 192.168.39.46 and MAC address 52:54:00:55:ce:4f in network mk-ha-587695
	I0920 18:12:09.485733   98536 main.go:141] libmachine: (ha-587695-m04) Calling .GetSSHPort
	I0920 18:12:09.485904   98536 main.go:141] libmachine: (ha-587695-m04) Calling .GetSSHKeyPath
	I0920 18:12:09.486054   98536 main.go:141] libmachine: (ha-587695-m04) Calling .GetSSHUsername
	I0920 18:12:09.486143   98536 sshutil.go:53] new ssh client: &{IP:192.168.39.46 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19678-76160/.minikube/machines/ha-587695-m04/id_rsa Username:docker}
	I0920 18:12:09.575902   98536 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0920 18:12:09.591273   98536 status.go:176] ha-587695-m04 status: &{Name:ha-587695-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (13.94s)

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.63s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.63s)

TestMultiControlPlane/serial/RestartSecondaryNode (44.28s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-linux-amd64 -p ha-587695 node start m02 -v=7 --alsologtostderr
E0920 18:12:21.269127   83346 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-76160/.minikube/profiles/addons-545460/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:12:27.655881   83346 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-76160/.minikube/profiles/functional-698761/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:420: (dbg) Done: out/minikube-linux-amd64 -p ha-587695 node start m02 -v=7 --alsologtostderr: (43.364488739s)
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-587695 status -v=7 --alsologtostderr
ha_test.go:448: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (44.28s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.84s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.84s)

TestMultiControlPlane/serial/RestartClusterKeepsNodes (301.63s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-587695 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-linux-amd64 stop -p ha-587695 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Done: out/minikube-linux-amd64 stop -p ha-587695 -v=7 --alsologtostderr: (41.613673146s)
ha_test.go:467: (dbg) Run:  out/minikube-linux-amd64 start -p ha-587695 --wait=true -v=7 --alsologtostderr
E0920 18:13:49.577600   83346 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-76160/.minikube/profiles/functional-698761/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:16:05.716821   83346 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-76160/.minikube/profiles/functional-698761/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:16:33.419955   83346 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-76160/.minikube/profiles/functional-698761/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:16:53.561464   83346 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-76160/.minikube/profiles/addons-545460/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:467: (dbg) Done: out/minikube-linux-amd64 start -p ha-587695 --wait=true -v=7 --alsologtostderr: (4m19.923188175s)
ha_test.go:472: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-587695
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (301.63s)

TestMultiControlPlane/serial/DeleteSecondaryNode (7.15s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-linux-amd64 -p ha-587695 node delete m03 -v=7 --alsologtostderr
ha_test.go:487: (dbg) Done: out/minikube-linux-amd64 -p ha-587695 node delete m03 -v=7 --alsologtostderr: (6.410876023s)
ha_test.go:493: (dbg) Run:  out/minikube-linux-amd64 -p ha-587695 status -v=7 --alsologtostderr
ha_test.go:511: (dbg) Run:  kubectl get nodes
ha_test.go:519: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (7.15s)

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.64s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.64s)

TestMultiControlPlane/serial/StopCluster (37.69s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-linux-amd64 -p ha-587695 stop -v=7 --alsologtostderr
ha_test.go:531: (dbg) Done: out/minikube-linux-amd64 -p ha-587695 stop -v=7 --alsologtostderr: (37.592395493s)
ha_test.go:537: (dbg) Run:  out/minikube-linux-amd64 -p ha-587695 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-587695 status -v=7 --alsologtostderr: exit status 7 (100.418989ms)

-- stdout --
	ha-587695
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-587695-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-587695-m04
	type: Worker
	host: Stopped
	kubelet: Stopped

-- /stdout --
** stderr ** 
	I0920 18:18:42.403983  101628 out.go:345] Setting OutFile to fd 1 ...
	I0920 18:18:42.404108  101628 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 18:18:42.404120  101628 out.go:358] Setting ErrFile to fd 2...
	I0920 18:18:42.404126  101628 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 18:18:42.404332  101628 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19678-76160/.minikube/bin
	I0920 18:18:42.404497  101628 out.go:352] Setting JSON to false
	I0920 18:18:42.404533  101628 mustload.go:65] Loading cluster: ha-587695
	I0920 18:18:42.404652  101628 notify.go:220] Checking for updates...
	I0920 18:18:42.405089  101628 config.go:182] Loaded profile config "ha-587695": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0920 18:18:42.405120  101628 status.go:174] checking status of ha-587695 ...
	I0920 18:18:42.405754  101628 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0920 18:18:42.405803  101628 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:18:42.423691  101628 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46863
	I0920 18:18:42.424074  101628 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:18:42.424693  101628 main.go:141] libmachine: Using API Version  1
	I0920 18:18:42.424713  101628 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:18:42.425128  101628 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:18:42.425340  101628 main.go:141] libmachine: (ha-587695) Calling .GetState
	I0920 18:18:42.427163  101628 status.go:364] ha-587695 host status = "Stopped" (err=<nil>)
	I0920 18:18:42.427179  101628 status.go:377] host is not running, skipping remaining checks
	I0920 18:18:42.427187  101628 status.go:176] ha-587695 status: &{Name:ha-587695 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0920 18:18:42.427234  101628 status.go:174] checking status of ha-587695-m02 ...
	I0920 18:18:42.427628  101628 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0920 18:18:42.427712  101628 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:18:42.441811  101628 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42001
	I0920 18:18:42.442241  101628 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:18:42.442690  101628 main.go:141] libmachine: Using API Version  1
	I0920 18:18:42.442708  101628 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:18:42.442991  101628 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:18:42.443143  101628 main.go:141] libmachine: (ha-587695-m02) Calling .GetState
	I0920 18:18:42.444411  101628 status.go:364] ha-587695-m02 host status = "Stopped" (err=<nil>)
	I0920 18:18:42.444425  101628 status.go:377] host is not running, skipping remaining checks
	I0920 18:18:42.444432  101628 status.go:176] ha-587695-m02 status: &{Name:ha-587695-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0920 18:18:42.444450  101628 status.go:174] checking status of ha-587695-m04 ...
	I0920 18:18:42.444757  101628 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0920 18:18:42.444803  101628 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:18:42.458880  101628 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43751
	I0920 18:18:42.459313  101628 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:18:42.459788  101628 main.go:141] libmachine: Using API Version  1
	I0920 18:18:42.459822  101628 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:18:42.460117  101628 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:18:42.460453  101628 main.go:141] libmachine: (ha-587695-m04) Calling .GetState
	I0920 18:18:42.461930  101628 status.go:364] ha-587695-m04 host status = "Stopped" (err=<nil>)
	I0920 18:18:42.461951  101628 status.go:377] host is not running, skipping remaining checks
	I0920 18:18:42.461958  101628 status.go:176] ha-587695-m04 status: &{Name:ha-587695-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (37.69s)

TestMultiControlPlane/serial/RestartCluster (163.91s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-linux-amd64 start -p ha-587695 --wait=true -v=7 --alsologtostderr --driver=kvm2 
E0920 18:21:05.716651   83346 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-76160/.minikube/profiles/functional-698761/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:560: (dbg) Done: out/minikube-linux-amd64 start -p ha-587695 --wait=true -v=7 --alsologtostderr --driver=kvm2 : (2m43.150270203s)
ha_test.go:566: (dbg) Run:  out/minikube-linux-amd64 -p ha-587695 status -v=7 --alsologtostderr
ha_test.go:584: (dbg) Run:  kubectl get nodes
ha_test.go:592: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (163.91s)

TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.6s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.60s)

TestMultiControlPlane/serial/AddSecondaryNode (85.07s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-587695 --control-plane -v=7 --alsologtostderr
E0920 18:21:53.562404   83346 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-76160/.minikube/profiles/addons-545460/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:605: (dbg) Done: out/minikube-linux-amd64 node add -p ha-587695 --control-plane -v=7 --alsologtostderr: (1m24.233075844s)
ha_test.go:611: (dbg) Run:  out/minikube-linux-amd64 -p ha-587695 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (85.07s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.82s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.82s)

TestImageBuild/serial/Setup (51.42s)

=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-linux-amd64 start -p image-170871 --driver=kvm2 
E0920 18:23:16.630819   83346 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-76160/.minikube/profiles/addons-545460/client.crt: no such file or directory" logger="UnhandledError"
image_test.go:69: (dbg) Done: out/minikube-linux-amd64 start -p image-170871 --driver=kvm2 : (51.417743495s)
--- PASS: TestImageBuild/serial/Setup (51.42s)

TestImageBuild/serial/NormalBuild (1.47s)

=== RUN   TestImageBuild/serial/NormalBuild
image_test.go:78: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-170871
image_test.go:78: (dbg) Done: out/minikube-linux-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-170871: (1.470597734s)
--- PASS: TestImageBuild/serial/NormalBuild (1.47s)

TestImageBuild/serial/BuildWithBuildArg (0.93s)

=== RUN   TestImageBuild/serial/BuildWithBuildArg
image_test.go:99: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-170871
--- PASS: TestImageBuild/serial/BuildWithBuildArg (0.93s)

TestImageBuild/serial/BuildWithDockerIgnore (0.63s)

=== RUN   TestImageBuild/serial/BuildWithDockerIgnore
image_test.go:133: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-170871
--- PASS: TestImageBuild/serial/BuildWithDockerIgnore (0.63s)

TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.65s)

=== RUN   TestImageBuild/serial/BuildWithSpecifiedDockerfile
image_test.go:88: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-170871
--- PASS: TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.65s)

TestJSONOutput/start/Command (89.4s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-116039 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2 
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-116039 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2 : (1m29.403254977s)
--- PASS: TestJSONOutput/start/Command (89.40s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.56s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-116039 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.56s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.54s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-116039 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.54s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (7.55s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-116039 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-116039 --output=json --user=testUser: (7.550019163s)
--- PASS: TestJSONOutput/stop/Command (7.55s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.2s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-410126 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-410126 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (63.424157ms)
-- stdout --
	{"specversion":"1.0","id":"d501b3d0-5278-4516-a8ce-38cb6b2c6c56","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-410126] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"a6e5b3cb-091b-4498-aa6d-5d470740452d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19678"}}
	{"specversion":"1.0","id":"0a70e4fa-888b-4dc6-a6d2-57e380aad887","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"ba2986ac-5e8e-426d-9c17-ca16c0d45137","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19678-76160/kubeconfig"}}
	{"specversion":"1.0","id":"2464eee1-fefe-49e3-b5a6-08779ad4d21c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19678-76160/.minikube"}}
	{"specversion":"1.0","id":"f1c67822-5264-4ff8-a4fe-c44fe91e6be6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"51b84c47-8600-4c6a-969d-432d72d4c4ab","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"2c09e0ec-6786-4259-b746-482a0079c6d0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-410126" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-410126
--- PASS: TestErrorJSONOutput (0.20s)

TestMainNoArgs (0.05s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.05s)

TestMinikubeProfile (102.73s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-096703 --driver=kvm2 
E0920 18:26:05.718505   83346 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-76160/.minikube/profiles/functional-698761/client.crt: no such file or directory" logger="UnhandledError"
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-096703 --driver=kvm2 : (50.383080717s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-109046 --driver=kvm2 
E0920 18:26:53.563558   83346 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-76160/.minikube/profiles/addons-545460/client.crt: no such file or directory" logger="UnhandledError"
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-109046 --driver=kvm2 : (49.563828443s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-096703
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-109046
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-109046" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-109046
helpers_test.go:175: Cleaning up "first-096703" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-096703
--- PASS: TestMinikubeProfile (102.73s)

TestMountStart/serial/StartWithMountFirst (31.19s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-726151 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2 
E0920 18:27:28.781548   83346 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-76160/.minikube/profiles/functional-698761/client.crt: no such file or directory" logger="UnhandledError"
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-726151 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2 : (30.192656774s)
--- PASS: TestMountStart/serial/StartWithMountFirst (31.19s)

TestMountStart/serial/VerifyMountFirst (0.37s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-726151 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-726151 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountFirst (0.37s)

TestMountStart/serial/StartWithMountSecond (31.3s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-745007 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2 
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-745007 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2 : (30.302035256s)
--- PASS: TestMountStart/serial/StartWithMountSecond (31.30s)

TestMountStart/serial/VerifyMountSecond (0.36s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-745007 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-745007 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountSecond (0.36s)

TestMountStart/serial/DeleteFirst (0.68s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-726151 --alsologtostderr -v=5
--- PASS: TestMountStart/serial/DeleteFirst (0.68s)

TestMountStart/serial/VerifyMountPostDelete (0.36s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-745007 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-745007 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.36s)

TestMountStart/serial/Stop (2.27s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-745007
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-745007: (2.274516454s)
--- PASS: TestMountStart/serial/Stop (2.27s)

TestMountStart/serial/RestartStopped (26.49s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-745007
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-745007: (25.487763959s)
--- PASS: TestMountStart/serial/RestartStopped (26.49s)

TestMountStart/serial/VerifyMountPostStop (0.36s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-745007 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-745007 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.36s)

TestMultiNode/serial/FreshStart2Nodes (129.48s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-761106 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2 
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-761106 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2 : (2m9.059647719s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-761106 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (129.48s)

TestMultiNode/serial/DeployApp2Nodes (3.35s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-761106 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-761106 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-761106 -- rollout status deployment/busybox: (1.716418125s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-761106 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-761106 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-761106 -- exec busybox-7dff88458-bz72h -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-761106 -- exec busybox-7dff88458-xwhhw -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-761106 -- exec busybox-7dff88458-bz72h -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-761106 -- exec busybox-7dff88458-xwhhw -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-761106 -- exec busybox-7dff88458-bz72h -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-761106 -- exec busybox-7dff88458-xwhhw -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (3.35s)

TestMultiNode/serial/PingHostFrom2Pods (0.81s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-761106 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-761106 -- exec busybox-7dff88458-bz72h -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-761106 -- exec busybox-7dff88458-bz72h -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-761106 -- exec busybox-7dff88458-xwhhw -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-761106 -- exec busybox-7dff88458-xwhhw -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.81s)

TestMultiNode/serial/AddNode (59.12s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-761106 -v 3 --alsologtostderr
E0920 18:31:05.716963   83346 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-76160/.minikube/profiles/functional-698761/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:31:53.561864   83346 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-76160/.minikube/profiles/addons-545460/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-761106 -v 3 --alsologtostderr: (58.544407539s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-761106 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (59.12s)

TestMultiNode/serial/MultiNodeLabels (0.06s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-761106 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

TestMultiNode/serial/ProfileList (0.57s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.57s)

TestMultiNode/serial/CopyFile (7.04s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-761106 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-761106 cp testdata/cp-test.txt multinode-761106:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-761106 ssh -n multinode-761106 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-761106 cp multinode-761106:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2399687127/001/cp-test_multinode-761106.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-761106 ssh -n multinode-761106 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-761106 cp multinode-761106:/home/docker/cp-test.txt multinode-761106-m02:/home/docker/cp-test_multinode-761106_multinode-761106-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-761106 ssh -n multinode-761106 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-761106 ssh -n multinode-761106-m02 "sudo cat /home/docker/cp-test_multinode-761106_multinode-761106-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-761106 cp multinode-761106:/home/docker/cp-test.txt multinode-761106-m03:/home/docker/cp-test_multinode-761106_multinode-761106-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-761106 ssh -n multinode-761106 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-761106 ssh -n multinode-761106-m03 "sudo cat /home/docker/cp-test_multinode-761106_multinode-761106-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-761106 cp testdata/cp-test.txt multinode-761106-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-761106 ssh -n multinode-761106-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-761106 cp multinode-761106-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2399687127/001/cp-test_multinode-761106-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-761106 ssh -n multinode-761106-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-761106 cp multinode-761106-m02:/home/docker/cp-test.txt multinode-761106:/home/docker/cp-test_multinode-761106-m02_multinode-761106.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-761106 ssh -n multinode-761106-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-761106 ssh -n multinode-761106 "sudo cat /home/docker/cp-test_multinode-761106-m02_multinode-761106.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-761106 cp multinode-761106-m02:/home/docker/cp-test.txt multinode-761106-m03:/home/docker/cp-test_multinode-761106-m02_multinode-761106-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-761106 ssh -n multinode-761106-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-761106 ssh -n multinode-761106-m03 "sudo cat /home/docker/cp-test_multinode-761106-m02_multinode-761106-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-761106 cp testdata/cp-test.txt multinode-761106-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-761106 ssh -n multinode-761106-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-761106 cp multinode-761106-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2399687127/001/cp-test_multinode-761106-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-761106 ssh -n multinode-761106-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-761106 cp multinode-761106-m03:/home/docker/cp-test.txt multinode-761106:/home/docker/cp-test_multinode-761106-m03_multinode-761106.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-761106 ssh -n multinode-761106-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-761106 ssh -n multinode-761106 "sudo cat /home/docker/cp-test_multinode-761106-m03_multinode-761106.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-761106 cp multinode-761106-m03:/home/docker/cp-test.txt multinode-761106-m02:/home/docker/cp-test_multinode-761106-m03_multinode-761106-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-761106 ssh -n multinode-761106-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-761106 ssh -n multinode-761106-m02 "sudo cat /home/docker/cp-test_multinode-761106-m03_multinode-761106-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (7.04s)

TestMultiNode/serial/StopNode (3.33s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-761106 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-761106 node stop m03: (2.503517593s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-761106 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-761106 status: exit status 7 (405.519133ms)
-- stdout --
	multinode-761106
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-761106-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-761106-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-761106 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-761106 status --alsologtostderr: exit status 7 (419.63703ms)
-- stdout --
	multinode-761106
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-761106-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-761106-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I0920 18:32:11.130228  110273 out.go:345] Setting OutFile to fd 1 ...
	I0920 18:32:11.130352  110273 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 18:32:11.130364  110273 out.go:358] Setting ErrFile to fd 2...
	I0920 18:32:11.130370  110273 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 18:32:11.130636  110273 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19678-76160/.minikube/bin
	I0920 18:32:11.130880  110273 out.go:352] Setting JSON to false
	I0920 18:32:11.130929  110273 mustload.go:65] Loading cluster: multinode-761106
	I0920 18:32:11.131036  110273 notify.go:220] Checking for updates...
	I0920 18:32:11.131430  110273 config.go:182] Loaded profile config "multinode-761106": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0920 18:32:11.131459  110273 status.go:174] checking status of multinode-761106 ...
	I0920 18:32:11.131881  110273 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0920 18:32:11.131953  110273 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:32:11.147219  110273 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38971
	I0920 18:32:11.147661  110273 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:32:11.148228  110273 main.go:141] libmachine: Using API Version  1
	I0920 18:32:11.148248  110273 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:32:11.148627  110273 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:32:11.148834  110273 main.go:141] libmachine: (multinode-761106) Calling .GetState
	I0920 18:32:11.150223  110273 status.go:364] multinode-761106 host status = "Running" (err=<nil>)
	I0920 18:32:11.150241  110273 host.go:66] Checking if "multinode-761106" exists ...
	I0920 18:32:11.150558  110273 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0920 18:32:11.150621  110273 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:32:11.165158  110273 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43577
	I0920 18:32:11.165579  110273 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:32:11.166093  110273 main.go:141] libmachine: Using API Version  1
	I0920 18:32:11.166129  110273 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:32:11.166499  110273 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:32:11.166680  110273 main.go:141] libmachine: (multinode-761106) Calling .GetIP
	I0920 18:32:11.169971  110273 main.go:141] libmachine: (multinode-761106) DBG | domain multinode-761106 has defined MAC address 52:54:00:d9:6d:09 in network mk-multinode-761106
	I0920 18:32:11.170413  110273 main.go:141] libmachine: (multinode-761106) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:6d:09", ip: ""} in network mk-multinode-761106: {Iface:virbr1 ExpiryTime:2024-09-20 19:29:02 +0000 UTC Type:0 Mac:52:54:00:d9:6d:09 Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:multinode-761106 Clientid:01:52:54:00:d9:6d:09}
	I0920 18:32:11.170448  110273 main.go:141] libmachine: (multinode-761106) DBG | domain multinode-761106 has defined IP address 192.168.39.125 and MAC address 52:54:00:d9:6d:09 in network mk-multinode-761106
	I0920 18:32:11.170628  110273 host.go:66] Checking if "multinode-761106" exists ...
	I0920 18:32:11.171039  110273 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0920 18:32:11.171088  110273 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:32:11.185729  110273 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33941
	I0920 18:32:11.186236  110273 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:32:11.186648  110273 main.go:141] libmachine: Using API Version  1
	I0920 18:32:11.186666  110273 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:32:11.186949  110273 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:32:11.187117  110273 main.go:141] libmachine: (multinode-761106) Calling .DriverName
	I0920 18:32:11.187310  110273 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0920 18:32:11.187345  110273 main.go:141] libmachine: (multinode-761106) Calling .GetSSHHostname
	I0920 18:32:11.189856  110273 main.go:141] libmachine: (multinode-761106) DBG | domain multinode-761106 has defined MAC address 52:54:00:d9:6d:09 in network mk-multinode-761106
	I0920 18:32:11.190331  110273 main.go:141] libmachine: (multinode-761106) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:6d:09", ip: ""} in network mk-multinode-761106: {Iface:virbr1 ExpiryTime:2024-09-20 19:29:02 +0000 UTC Type:0 Mac:52:54:00:d9:6d:09 Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:multinode-761106 Clientid:01:52:54:00:d9:6d:09}
	I0920 18:32:11.190365  110273 main.go:141] libmachine: (multinode-761106) DBG | domain multinode-761106 has defined IP address 192.168.39.125 and MAC address 52:54:00:d9:6d:09 in network mk-multinode-761106
	I0920 18:32:11.190529  110273 main.go:141] libmachine: (multinode-761106) Calling .GetSSHPort
	I0920 18:32:11.190665  110273 main.go:141] libmachine: (multinode-761106) Calling .GetSSHKeyPath
	I0920 18:32:11.190816  110273 main.go:141] libmachine: (multinode-761106) Calling .GetSSHUsername
	I0920 18:32:11.190949  110273 sshutil.go:53] new ssh client: &{IP:192.168.39.125 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19678-76160/.minikube/machines/multinode-761106/id_rsa Username:docker}
	I0920 18:32:11.273093  110273 ssh_runner.go:195] Run: systemctl --version
	I0920 18:32:11.278841  110273 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0920 18:32:11.294097  110273 kubeconfig.go:125] found "multinode-761106" server: "https://192.168.39.125:8443"
	I0920 18:32:11.294131  110273 api_server.go:166] Checking apiserver status ...
	I0920 18:32:11.294170  110273 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:32:11.307801  110273 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1814/cgroup
	W0920 18:32:11.317216  110273 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1814/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0920 18:32:11.317265  110273 ssh_runner.go:195] Run: ls
	I0920 18:32:11.321508  110273 api_server.go:253] Checking apiserver healthz at https://192.168.39.125:8443/healthz ...
	I0920 18:32:11.325690  110273 api_server.go:279] https://192.168.39.125:8443/healthz returned 200:
	ok
	I0920 18:32:11.325711  110273 status.go:456] multinode-761106 apiserver status = Running (err=<nil>)
	I0920 18:32:11.325721  110273 status.go:176] multinode-761106 status: &{Name:multinode-761106 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0920 18:32:11.325736  110273 status.go:174] checking status of multinode-761106-m02 ...
	I0920 18:32:11.326034  110273 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0920 18:32:11.326068  110273 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:32:11.341418  110273 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43913
	I0920 18:32:11.341946  110273 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:32:11.342419  110273 main.go:141] libmachine: Using API Version  1
	I0920 18:32:11.342436  110273 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:32:11.342749  110273 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:32:11.342939  110273 main.go:141] libmachine: (multinode-761106-m02) Calling .GetState
	I0920 18:32:11.344522  110273 status.go:364] multinode-761106-m02 host status = "Running" (err=<nil>)
	I0920 18:32:11.344538  110273 host.go:66] Checking if "multinode-761106-m02" exists ...
	I0920 18:32:11.344850  110273 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0920 18:32:11.344891  110273 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:32:11.359691  110273 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34515
	I0920 18:32:11.360183  110273 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:32:11.360700  110273 main.go:141] libmachine: Using API Version  1
	I0920 18:32:11.360721  110273 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:32:11.361126  110273 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:32:11.361309  110273 main.go:141] libmachine: (multinode-761106-m02) Calling .GetIP
	I0920 18:32:11.364252  110273 main.go:141] libmachine: (multinode-761106-m02) DBG | domain multinode-761106-m02 has defined MAC address 52:54:00:b2:11:8b in network mk-multinode-761106
	I0920 18:32:11.364731  110273 main.go:141] libmachine: (multinode-761106-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:11:8b", ip: ""} in network mk-multinode-761106: {Iface:virbr1 ExpiryTime:2024-09-20 19:30:15 +0000 UTC Type:0 Mac:52:54:00:b2:11:8b Iaid: IPaddr:192.168.39.232 Prefix:24 Hostname:multinode-761106-m02 Clientid:01:52:54:00:b2:11:8b}
	I0920 18:32:11.364771  110273 main.go:141] libmachine: (multinode-761106-m02) DBG | domain multinode-761106-m02 has defined IP address 192.168.39.232 and MAC address 52:54:00:b2:11:8b in network mk-multinode-761106
	I0920 18:32:11.364890  110273 host.go:66] Checking if "multinode-761106-m02" exists ...
	I0920 18:32:11.365290  110273 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0920 18:32:11.365347  110273 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:32:11.380429  110273 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45577
	I0920 18:32:11.380887  110273 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:32:11.381302  110273 main.go:141] libmachine: Using API Version  1
	I0920 18:32:11.381327  110273 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:32:11.381680  110273 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:32:11.381892  110273 main.go:141] libmachine: (multinode-761106-m02) Calling .DriverName
	I0920 18:32:11.382116  110273 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0920 18:32:11.382137  110273 main.go:141] libmachine: (multinode-761106-m02) Calling .GetSSHHostname
	I0920 18:32:11.384731  110273 main.go:141] libmachine: (multinode-761106-m02) DBG | domain multinode-761106-m02 has defined MAC address 52:54:00:b2:11:8b in network mk-multinode-761106
	I0920 18:32:11.385107  110273 main.go:141] libmachine: (multinode-761106-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:11:8b", ip: ""} in network mk-multinode-761106: {Iface:virbr1 ExpiryTime:2024-09-20 19:30:15 +0000 UTC Type:0 Mac:52:54:00:b2:11:8b Iaid: IPaddr:192.168.39.232 Prefix:24 Hostname:multinode-761106-m02 Clientid:01:52:54:00:b2:11:8b}
	I0920 18:32:11.385150  110273 main.go:141] libmachine: (multinode-761106-m02) DBG | domain multinode-761106-m02 has defined IP address 192.168.39.232 and MAC address 52:54:00:b2:11:8b in network mk-multinode-761106
	I0920 18:32:11.385281  110273 main.go:141] libmachine: (multinode-761106-m02) Calling .GetSSHPort
	I0920 18:32:11.385434  110273 main.go:141] libmachine: (multinode-761106-m02) Calling .GetSSHKeyPath
	I0920 18:32:11.385583  110273 main.go:141] libmachine: (multinode-761106-m02) Calling .GetSSHUsername
	I0920 18:32:11.385755  110273 sshutil.go:53] new ssh client: &{IP:192.168.39.232 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19678-76160/.minikube/machines/multinode-761106-m02/id_rsa Username:docker}
	I0920 18:32:11.472380  110273 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0920 18:32:11.486808  110273 status.go:176] multinode-761106-m02 status: &{Name:multinode-761106-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0920 18:32:11.486851  110273 status.go:174] checking status of multinode-761106-m03 ...
	I0920 18:32:11.487187  110273 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0920 18:32:11.487232  110273 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:32:11.502539  110273 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46123
	I0920 18:32:11.503035  110273 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:32:11.503485  110273 main.go:141] libmachine: Using API Version  1
	I0920 18:32:11.503508  110273 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:32:11.503858  110273 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:32:11.504067  110273 main.go:141] libmachine: (multinode-761106-m03) Calling .GetState
	I0920 18:32:11.505473  110273 status.go:364] multinode-761106-m03 host status = "Stopped" (err=<nil>)
	I0920 18:32:11.505489  110273 status.go:377] host is not running, skipping remaining checks
	I0920 18:32:11.505495  110273 status.go:176] multinode-761106-m03 status: &{Name:multinode-761106-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopNode (3.33s)

TestMultiNode/serial/StartAfterStop (42.1s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-761106 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-761106 node start m03 -v=7 --alsologtostderr: (41.482867054s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-761106 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (42.10s)

TestMultiNode/serial/RestartKeepsNodes (193.68s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-761106
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-761106
multinode_test.go:321: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-761106: (28.202056369s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-761106 --wait=true -v=8 --alsologtostderr
E0920 18:36:05.717238   83346 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-76160/.minikube/profiles/functional-698761/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-761106 --wait=true -v=8 --alsologtostderr: (2m45.383864714s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-761106
--- PASS: TestMultiNode/serial/RestartKeepsNodes (193.68s)

TestMultiNode/serial/DeleteNode (2.06s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-761106 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-761106 node delete m03: (1.54120103s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-761106 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (2.06s)

TestMultiNode/serial/StopMultiNode (25.85s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-761106 stop
multinode_test.go:345: (dbg) Done: out/minikube-linux-amd64 -p multinode-761106 stop: (25.690878275s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-761106 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-761106 status: exit status 7 (80.565398ms)

-- stdout --
	multinode-761106
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-761106-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-761106 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-761106 status --alsologtostderr: exit status 7 (82.484092ms)

-- stdout --
	multinode-761106
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-761106-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0920 18:36:35.149612  112091 out.go:345] Setting OutFile to fd 1 ...
	I0920 18:36:35.149880  112091 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 18:36:35.149891  112091 out.go:358] Setting ErrFile to fd 2...
	I0920 18:36:35.149895  112091 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 18:36:35.150094  112091 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19678-76160/.minikube/bin
	I0920 18:36:35.150304  112091 out.go:352] Setting JSON to false
	I0920 18:36:35.150342  112091 mustload.go:65] Loading cluster: multinode-761106
	I0920 18:36:35.150437  112091 notify.go:220] Checking for updates...
	I0920 18:36:35.150964  112091 config.go:182] Loaded profile config "multinode-761106": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0920 18:36:35.151000  112091 status.go:174] checking status of multinode-761106 ...
	I0920 18:36:35.151515  112091 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0920 18:36:35.151575  112091 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:36:35.170074  112091 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37995
	I0920 18:36:35.170518  112091 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:36:35.171212  112091 main.go:141] libmachine: Using API Version  1
	I0920 18:36:35.171255  112091 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:36:35.171623  112091 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:36:35.171828  112091 main.go:141] libmachine: (multinode-761106) Calling .GetState
	I0920 18:36:35.173348  112091 status.go:364] multinode-761106 host status = "Stopped" (err=<nil>)
	I0920 18:36:35.173359  112091 status.go:377] host is not running, skipping remaining checks
	I0920 18:36:35.173364  112091 status.go:176] multinode-761106 status: &{Name:multinode-761106 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0920 18:36:35.173430  112091 status.go:174] checking status of multinode-761106-m02 ...
	I0920 18:36:35.173709  112091 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0920 18:36:35.173751  112091 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 18:36:35.187953  112091 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38921
	I0920 18:36:35.188394  112091 main.go:141] libmachine: () Calling .GetVersion
	I0920 18:36:35.188818  112091 main.go:141] libmachine: Using API Version  1
	I0920 18:36:35.188839  112091 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 18:36:35.189178  112091 main.go:141] libmachine: () Calling .GetMachineName
	I0920 18:36:35.189309  112091 main.go:141] libmachine: (multinode-761106-m02) Calling .GetState
	I0920 18:36:35.190811  112091 status.go:364] multinode-761106-m02 host status = "Stopped" (err=<nil>)
	I0920 18:36:35.190829  112091 status.go:377] host is not running, skipping remaining checks
	I0920 18:36:35.190835  112091 status.go:176] multinode-761106-m02 status: &{Name:multinode-761106-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (25.85s)

TestMultiNode/serial/RestartMultiNode (179.1s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-761106 --wait=true -v=8 --alsologtostderr --driver=kvm2 
E0920 18:36:53.561782   83346 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-76160/.minikube/profiles/addons-545460/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-761106 --wait=true -v=8 --alsologtostderr --driver=kvm2 : (2m58.598046971s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-761106 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (179.10s)

TestMultiNode/serial/ValidateNameConflict (51.76s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-761106
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-761106-m02 --driver=kvm2 
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-761106-m02 --driver=kvm2 : exit status 14 (59.427704ms)

-- stdout --
	* [multinode-761106-m02] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19678
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19678-76160/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19678-76160/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	! Profile name 'multinode-761106-m02' is duplicated with machine name 'multinode-761106-m02' in profile 'multinode-761106'
	X Exiting due to MK_USAGE: Profile name should be unique

** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-761106-m03 --driver=kvm2 
E0920 18:39:56.632623   83346 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-76160/.minikube/profiles/addons-545460/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-761106-m03 --driver=kvm2 : (50.437382802s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-761106
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-761106: exit status 80 (202.294246ms)

-- stdout --
	* Adding node m03 to cluster multinode-761106 as [worker]
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-761106-m03 already exists in multinode-761106-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-761106-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-amd64 delete -p multinode-761106-m03: (1.018143299s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (51.76s)

TestPreload (156.68s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-805128 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --kubernetes-version=v1.24.4
E0920 18:41:05.716533   83346 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-76160/.minikube/profiles/functional-698761/client.crt: no such file or directory" logger="UnhandledError"
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-805128 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --kubernetes-version=v1.24.4: (1m23.472353524s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-805128 image pull gcr.io/k8s-minikube/busybox
preload_test.go:58: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-805128
E0920 18:41:53.562285   83346 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-76160/.minikube/profiles/addons-545460/client.crt: no such file or directory" logger="UnhandledError"
preload_test.go:58: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-805128: (12.551642403s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-805128 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2 
preload_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-805128 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2 : (58.78112475s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-805128 image list
helpers_test.go:175: Cleaning up "test-preload-805128" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-805128
--- PASS: TestPreload (156.68s)

TestScheduledStopUnix (121.66s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-151122 --memory=2048 --driver=kvm2 
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-151122 --memory=2048 --driver=kvm2 : (50.069890859s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-151122 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-151122 -n scheduled-stop-151122
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-151122 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
I0920 18:43:54.884824   83346 retry.go:31] will retry after 138.822µs: open /home/jenkins/minikube-integration/19678-76160/.minikube/profiles/scheduled-stop-151122/pid: no such file or directory
I0920 18:43:54.885995   83346 retry.go:31] will retry after 160.415µs: open /home/jenkins/minikube-integration/19678-76160/.minikube/profiles/scheduled-stop-151122/pid: no such file or directory
I0920 18:43:54.887113   83346 retry.go:31] will retry after 274.801µs: open /home/jenkins/minikube-integration/19678-76160/.minikube/profiles/scheduled-stop-151122/pid: no such file or directory
I0920 18:43:54.888246   83346 retry.go:31] will retry after 390.637µs: open /home/jenkins/minikube-integration/19678-76160/.minikube/profiles/scheduled-stop-151122/pid: no such file or directory
I0920 18:43:54.889387   83346 retry.go:31] will retry after 435.209µs: open /home/jenkins/minikube-integration/19678-76160/.minikube/profiles/scheduled-stop-151122/pid: no such file or directory
I0920 18:43:54.890522   83346 retry.go:31] will retry after 536.837µs: open /home/jenkins/minikube-integration/19678-76160/.minikube/profiles/scheduled-stop-151122/pid: no such file or directory
I0920 18:43:54.891636   83346 retry.go:31] will retry after 1.463867ms: open /home/jenkins/minikube-integration/19678-76160/.minikube/profiles/scheduled-stop-151122/pid: no such file or directory
I0920 18:43:54.893854   83346 retry.go:31] will retry after 1.185163ms: open /home/jenkins/minikube-integration/19678-76160/.minikube/profiles/scheduled-stop-151122/pid: no such file or directory
I0920 18:43:54.896044   83346 retry.go:31] will retry after 1.495709ms: open /home/jenkins/minikube-integration/19678-76160/.minikube/profiles/scheduled-stop-151122/pid: no such file or directory
I0920 18:43:54.898243   83346 retry.go:31] will retry after 5.140225ms: open /home/jenkins/minikube-integration/19678-76160/.minikube/profiles/scheduled-stop-151122/pid: no such file or directory
I0920 18:43:54.904454   83346 retry.go:31] will retry after 5.645916ms: open /home/jenkins/minikube-integration/19678-76160/.minikube/profiles/scheduled-stop-151122/pid: no such file or directory
I0920 18:43:54.910657   83346 retry.go:31] will retry after 5.751018ms: open /home/jenkins/minikube-integration/19678-76160/.minikube/profiles/scheduled-stop-151122/pid: no such file or directory
I0920 18:43:54.916858   83346 retry.go:31] will retry after 11.69399ms: open /home/jenkins/minikube-integration/19678-76160/.minikube/profiles/scheduled-stop-151122/pid: no such file or directory
I0920 18:43:54.929166   83346 retry.go:31] will retry after 20.263993ms: open /home/jenkins/minikube-integration/19678-76160/.minikube/profiles/scheduled-stop-151122/pid: no such file or directory
I0920 18:43:54.950395   83346 retry.go:31] will retry after 36.940693ms: open /home/jenkins/minikube-integration/19678-76160/.minikube/profiles/scheduled-stop-151122/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-151122 --cancel-scheduled
E0920 18:44:08.785176   83346 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-76160/.minikube/profiles/functional-698761/client.crt: no such file or directory" logger="UnhandledError"
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-151122 -n scheduled-stop-151122
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-151122
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-151122 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-151122
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-151122: exit status 7 (65.076332ms)

-- stdout --
	scheduled-stop-151122
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-151122 -n scheduled-stop-151122
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-151122 -n scheduled-stop-151122: exit status 7 (67.465155ms)

-- stdout --
	Stopped

-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-151122" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-151122
--- PASS: TestScheduledStopUnix (121.66s)

TestSkaffold (124.67s)

=== RUN   TestSkaffold
skaffold_test.go:59: (dbg) Run:  /tmp/skaffold.exe3136477473 version
skaffold_test.go:63: skaffold version: v2.13.2
skaffold_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p skaffold-806341 --memory=2600 --driver=kvm2 
skaffold_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p skaffold-806341 --memory=2600 --driver=kvm2 : (46.262548094s)
skaffold_test.go:86: copying out/minikube-linux-amd64 to /home/jenkins/workspace/KVM_Linux_integration/out/minikube
skaffold_test.go:105: (dbg) Run:  /tmp/skaffold.exe3136477473 run --minikube-profile skaffold-806341 --kube-context skaffold-806341 --status-check=true --port-forward=false --interactive=false
E0920 18:46:05.717663   83346 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-76160/.minikube/profiles/functional-698761/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:46:53.568557   83346 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-76160/.minikube/profiles/addons-545460/client.crt: no such file or directory" logger="UnhandledError"
skaffold_test.go:105: (dbg) Done: /tmp/skaffold.exe3136477473 run --minikube-profile skaffold-806341 --kube-context skaffold-806341 --status-check=true --port-forward=false --interactive=false: (1m5.49305554s)
skaffold_test.go:111: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-app" in namespace "default" ...
helpers_test.go:344: "leeroy-app-7f4df9d9c4-wvsmj" [fa3d5464-69a6-4693-b8ce-056404263f33] Running
skaffold_test.go:111: (dbg) TestSkaffold: app=leeroy-app healthy within 6.004026031s
skaffold_test.go:114: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-web" in namespace "default" ...
helpers_test.go:344: "leeroy-web-7b6ffcd748-8pfn9" [78a36e76-eb92-4c50-9aa0-c0a5e3a8c109] Running
skaffold_test.go:114: (dbg) TestSkaffold: app=leeroy-web healthy within 5.003823695s
helpers_test.go:175: Cleaning up "skaffold-806341" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p skaffold-806341
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p skaffold-806341: (1.18106246s)
--- PASS: TestSkaffold (124.67s)

TestRunningBinaryUpgrade (203.94s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.3562527996 start -p running-upgrade-254303 --memory=2200 --vm-driver=kvm2 
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.3562527996 start -p running-upgrade-254303 --memory=2200 --vm-driver=kvm2 : (2m10.141024376s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-254303 --memory=2200 --alsologtostderr -v=1 --driver=kvm2 
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-254303 --memory=2200 --alsologtostderr -v=1 --driver=kvm2 : (1m12.153439164s)
helpers_test.go:175: Cleaning up "running-upgrade-254303" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-254303
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-254303: (1.086019837s)
--- PASS: TestRunningBinaryUpgrade (203.94s)

TestKubernetesUpgrade (187.03s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-441294 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2 
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-441294 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2 : (53.528347479s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-441294
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-441294: (12.602008611s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-441294 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-441294 status --format={{.Host}}: exit status 7 (75.271289ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-441294 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=kvm2 
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-441294 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=kvm2 : (1m18.934642163s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-441294 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-441294 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2 
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-441294 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2 : exit status 106 (79.897605ms)

-- stdout --
	* [kubernetes-upgrade-441294] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19678
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19678-76160/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19678-76160/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.31.1 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-441294
	    minikube start -p kubernetes-upgrade-441294 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-4412942 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.31.1, by running:
	    
	    minikube start -p kubernetes-upgrade-441294 --kubernetes-version=v1.31.1
	    

** /stderr **
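The K8S_DOWNGRADE_UNSUPPORTED refusal above comes down to a version comparison: the requested v1.20.0 is older than the running cluster's v1.31.1. A minimal sketch of that kind of check, with hypothetical function names (minikube's real implementation uses a full semver library):

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// parseMajorMinor pulls the numeric major/minor out of a version
// string like "v1.31.1".
func parseMajorMinor(v string) (major, minor int, err error) {
	parts := strings.Split(strings.TrimPrefix(v, "v"), ".")
	if len(parts) < 2 {
		return 0, 0, fmt.Errorf("malformed version %q", v)
	}
	if major, err = strconv.Atoi(parts[0]); err != nil {
		return 0, 0, err
	}
	if minor, err = strconv.Atoi(parts[1]); err != nil {
		return 0, 0, err
	}
	return major, minor, nil
}

// isDowngrade reports whether moving an existing cluster from current
// to requested would be a Kubernetes downgrade, which minikube
// refuses to do in place.
func isDowngrade(current, requested string) (bool, error) {
	cm, cn, err := parseMajorMinor(current)
	if err != nil {
		return false, err
	}
	rm, rn, err := parseMajorMinor(requested)
	if err != nil {
		return false, err
	}
	return rm < cm || (rm == cm && rn < cn), nil
}

func main() {
	down, _ := isDowngrade("v1.31.1", "v1.20.0")
	fmt.Println("downgrade requested:", down) // downgrade requested: true
}
```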
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-441294 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=kvm2 
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-441294 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=kvm2 : (40.582617584s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-441294" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-441294
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-441294: (1.160318445s)
--- PASS: TestKubernetesUpgrade (187.03s)

TestPause/serial/Start (141.47s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-211467 --memory=2048 --install-addons=false --wait=all --driver=kvm2 
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-211467 --memory=2048 --install-addons=false --wait=all --driver=kvm2 : (2m21.467068224s)
--- PASS: TestPause/serial/Start (141.47s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.38s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-622650 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2 
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-622650 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2 : exit status 14 (378.954525ms)

-- stdout --
	* [NoKubernetes-622650] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19678
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19678-76160/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19678-76160/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
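The MK_USAGE failure above is a mutual-exclusion check on start flags: `--kubernetes-version` makes no sense combined with `--no-kubernetes`. A minimal sketch of such a validation, assuming a hypothetical function name (this is not minikube's actual code; exit status 14 in the log corresponds to this class of usage error):

```go
package main

import (
	"errors"
	"fmt"
)

// validateStartFlags rejects flag combinations that contradict each
// other before any cluster work begins.
func validateStartFlags(noKubernetes bool, kubernetesVersion string) error {
	if noKubernetes && kubernetesVersion != "" {
		return errors.New("cannot specify --kubernetes-version with --no-kubernetes")
	}
	return nil
}

func main() {
	err := validateStartFlags(true, "1.20")
	fmt.Println(err) // cannot specify --kubernetes-version with --no-kubernetes
}
```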
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.38s)

TestNoKubernetes/serial/StartWithK8s (84.45s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-622650 --driver=kvm2 
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-622650 --driver=kvm2 : (1m24.165623191s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-622650 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (84.45s)

TestPause/serial/SecondStartNoReconfiguration (48.55s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-211467 --alsologtostderr -v=1 --driver=kvm2 
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-211467 --alsologtostderr -v=1 --driver=kvm2 : (48.526854035s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (48.55s)

TestNoKubernetes/serial/StartWithStopK8s (17.87s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-622650 --no-kubernetes --driver=kvm2 
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-622650 --no-kubernetes --driver=kvm2 : (16.557386036s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-622650 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-622650 status -o json: exit status 2 (261.32819ms)

-- stdout --
	{"Name":"NoKubernetes-622650","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-622650
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-622650: (1.049884402s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (17.87s)

TestStoppedBinaryUpgrade/Setup (0.47s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.47s)

TestStoppedBinaryUpgrade/Upgrade (174.32s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.2563135343 start -p stopped-upgrade-640926 --memory=2200 --vm-driver=kvm2 
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.2563135343 start -p stopped-upgrade-640926 --memory=2200 --vm-driver=kvm2 : (54.684841374s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.2563135343 -p stopped-upgrade-640926 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.2563135343 -p stopped-upgrade-640926 stop: (13.155234269s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-640926 --memory=2200 --alsologtostderr -v=1 --driver=kvm2 
E0920 18:51:53.561563   83346 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-76160/.minikube/profiles/addons-545460/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:51:58.602651   83346 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-76160/.minikube/profiles/skaffold-806341/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:51:58.609033   83346 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-76160/.minikube/profiles/skaffold-806341/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:51:58.620405   83346 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-76160/.minikube/profiles/skaffold-806341/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:51:58.641762   83346 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-76160/.minikube/profiles/skaffold-806341/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:51:58.683195   83346 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-76160/.minikube/profiles/skaffold-806341/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:51:58.764663   83346 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-76160/.minikube/profiles/skaffold-806341/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:51:58.926249   83346 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-76160/.minikube/profiles/skaffold-806341/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:51:59.248494   83346 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-76160/.minikube/profiles/skaffold-806341/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:51:59.890516   83346 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-76160/.minikube/profiles/skaffold-806341/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-640926 --memory=2200 --alsologtostderr -v=1 --driver=kvm2 : (1m46.482316497s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (174.32s)

TestNoKubernetes/serial/Start (48.26s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-622650 --no-kubernetes --driver=kvm2 
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-622650 --no-kubernetes --driver=kvm2 : (48.260788336s)
--- PASS: TestNoKubernetes/serial/Start (48.26s)

TestPause/serial/Pause (0.67s)

=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-211467 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.67s)

TestPause/serial/VerifyStatus (0.27s)

=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p pause-211467 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-211467 --output=json --layout=cluster: exit status 2 (266.824303ms)

-- stdout --
	{"Name":"pause-211467","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 12 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.34.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-211467","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.27s)

TestPause/serial/Unpause (0.58s)

=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-amd64 unpause -p pause-211467 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.58s)

TestPause/serial/PauseAgain (0.78s)

=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-211467 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.78s)

TestPause/serial/DeletePaused (0.8s)

=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p pause-211467 --alsologtostderr -v=5
--- PASS: TestPause/serial/DeletePaused (0.80s)

TestPause/serial/VerifyDeletedResources (4.13s)

=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
pause_test.go:142: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (4.127707741s)
--- PASS: TestPause/serial/VerifyDeletedResources (4.13s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.21s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-622650 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-622650 "sudo systemctl is-active --quiet service kubelet": exit status 1 (211.472802ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.21s)

TestNoKubernetes/serial/ProfileList (1.21s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.21s)

TestNoKubernetes/serial/Stop (2.33s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-622650
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-622650: (2.326100257s)
--- PASS: TestNoKubernetes/serial/Stop (2.33s)

TestNoKubernetes/serial/StartNoArgs (71.79s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-622650 --driver=kvm2 
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-622650 --driver=kvm2 : (1m11.792408592s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (71.79s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.19s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-622650 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-622650 "sudo systemctl is-active --quiet service kubelet": exit status 1 (191.34506ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.19s)

TestStoppedBinaryUpgrade/MinikubeLogs (1.81s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-640926
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-amd64 logs -p stopped-upgrade-640926: (1.805922923s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.81s)

TestNetworkPlugins/group/auto/Start (78.28s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-670888 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2 
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-670888 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2 : (1m18.280477437s)
--- PASS: TestNetworkPlugins/group/auto/Start (78.28s)

TestNetworkPlugins/group/flannel/Start (99.98s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-670888 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2 
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-670888 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2 : (1m39.983578609s)
--- PASS: TestNetworkPlugins/group/flannel/Start (99.98s)

TestNetworkPlugins/group/enable-default-cni/Start (122.85s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-670888 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2 
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-670888 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2 : (2m2.850436258s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (122.85s)

TestNetworkPlugins/group/auto/KubeletFlags (0.29s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-670888 "pgrep -a kubelet"
I0920 18:55:58.565096   83346 config.go:182] Loaded profile config "auto-670888": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.29s)

TestNetworkPlugins/group/auto/NetCatPod (12.82s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-670888 replace --force -f testdata/netcat-deployment.yaml
I0920 18:55:59.312635   83346 kapi.go:136] Waiting for deployment netcat to stabilize, generation 1 observed generation 0 spec.replicas 1 status.replicas 0
I0920 18:55:59.341610   83346 kapi.go:136] Waiting for deployment netcat to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-6r42h" [9a23c77d-21e3-453f-b300-a32c2c7afeb6] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-6r42h" [9a23c77d-21e3-453f-b300-a32c2c7afeb6] Running
E0920 18:56:05.716685   83346 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-76160/.minikube/profiles/functional-698761/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 12.009835093s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (12.82s)

TestNetworkPlugins/group/auto/DNS (0.16s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-670888 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.16s)

TestNetworkPlugins/group/auto/Localhost (0.13s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-670888 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.13s)

TestNetworkPlugins/group/auto/HairPin (0.15s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-670888 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.15s)

TestNetworkPlugins/group/bridge/Start (97.75s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-670888 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2 
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-670888 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2 : (1m37.753417818s)
--- PASS: TestNetworkPlugins/group/bridge/Start (97.75s)

TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-2z9qw" [c8905ea4-4176-48d2-b083-3d42afee7a1b] Running
E0920 18:56:36.634548   83346 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-76160/.minikube/profiles/addons-545460/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.005490765s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

TestNetworkPlugins/group/flannel/KubeletFlags (0.24s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-670888 "pgrep -a kubelet"
I0920 18:56:38.781775   83346 config.go:182] Loaded profile config "flannel-670888": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.24s)

TestNetworkPlugins/group/flannel/NetCatPod (12.25s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-670888 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-rt7pp" [d2f6b21f-ae09-4301-8ff6-9f16a7d29952] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-rt7pp" [d2f6b21f-ae09-4301-8ff6-9f16a7d29952] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 12.004845271s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (12.25s)

TestNetworkPlugins/group/flannel/DNS (0.18s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-670888 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.18s)

TestNetworkPlugins/group/flannel/Localhost (0.16s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-670888 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.16s)

TestNetworkPlugins/group/flannel/HairPin (0.16s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-670888 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.16s)

TestNetworkPlugins/group/kubenet/Start (63.88s)

=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kubenet-670888 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=kvm2 
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kubenet-670888 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=kvm2 : (1m3.875611798s)
--- PASS: TestNetworkPlugins/group/kubenet/Start (63.88s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.22s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-670888 "pgrep -a kubelet"
I0920 18:57:28.591829   83346 config.go:182] Loaded profile config "enable-default-cni-670888": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.22s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.23s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-670888 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-d9vqw" [30e11351-818f-4927-9698-331b8c1d5ddd] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-d9vqw" [30e11351-818f-4927-9698-331b8c1d5ddd] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 10.004262165s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.23s)

TestNetworkPlugins/group/enable-default-cni/DNS (0.17s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-670888 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.17s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.16s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-670888 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.16s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.16s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-670888 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.16s)

TestNetworkPlugins/group/calico/Start (91.27s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-670888 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2 
E0920 18:58:03.082818   83346 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-76160/.minikube/profiles/gvisor-806572/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-670888 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2 : (1m31.273793393s)
--- PASS: TestNetworkPlugins/group/calico/Start (91.27s)

TestNetworkPlugins/group/kindnet/Start (98.39s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-670888 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2 
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-670888 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2 : (1m38.38633339s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (98.39s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.22s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-670888 "pgrep -a kubelet"
I0920 18:58:06.825827   83346 config.go:182] Loaded profile config "bridge-670888": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.22s)

TestNetworkPlugins/group/bridge/NetCatPod (12.26s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-670888 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-gbqg7" [6f3329ef-27c4-436e-9970-3d0f3a04a383] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-gbqg7" [6f3329ef-27c4-436e-9970-3d0f3a04a383] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 12.004113231s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (12.26s)

TestNetworkPlugins/group/kubenet/KubeletFlags (0.20s)

=== RUN   TestNetworkPlugins/group/kubenet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kubenet-670888 "pgrep -a kubelet"
I0920 18:58:12.531411   83346 config.go:182] Loaded profile config "kubenet-670888": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/kubenet/KubeletFlags (0.20s)

TestNetworkPlugins/group/kubenet/NetCatPod (10.20s)

=== RUN   TestNetworkPlugins/group/kubenet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kubenet-670888 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-8t7w6" [f269f6ec-c139-426c-8911-06ee872e832c] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-8t7w6" [f269f6ec-c139-426c-8911-06ee872e832c] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: app=netcat healthy within 10.004826196s
--- PASS: TestNetworkPlugins/group/kubenet/NetCatPod (10.20s)

TestNetworkPlugins/group/bridge/DNS (0.17s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-670888 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.17s)

TestNetworkPlugins/group/bridge/Localhost (0.14s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-670888 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.14s)

TestNetworkPlugins/group/bridge/HairPin (0.14s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-670888 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.14s)

TestNetworkPlugins/group/kubenet/DNS (0.19s)

=== RUN   TestNetworkPlugins/group/kubenet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kubenet-670888 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kubenet/DNS (0.19s)

TestNetworkPlugins/group/kubenet/Localhost (0.14s)

=== RUN   TestNetworkPlugins/group/kubenet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kubenet-670888 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kubenet/Localhost (0.14s)

TestNetworkPlugins/group/kubenet/HairPin (0.15s)

=== RUN   TestNetworkPlugins/group/kubenet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kubenet-670888 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kubenet/HairPin (0.15s)

TestNetworkPlugins/group/custom-flannel/Start (83.85s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-670888 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2 
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-670888 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2 : (1m23.833165793s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (83.85s)

TestNetworkPlugins/group/false/Start (102.51s)

=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p false-670888 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=kvm2 
E0920 18:59:04.525983   83346 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-76160/.minikube/profiles/gvisor-806572/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p false-670888 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=kvm2 : (1m42.513249133s)
--- PASS: TestNetworkPlugins/group/false/Start (102.51s)

TestNetworkPlugins/group/calico/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-rfnwl" [ccc4f16a-ac5f-4d87-9430-6af4ff5f0c8d] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.006149738s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

TestNetworkPlugins/group/calico/KubeletFlags (0.21s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-670888 "pgrep -a kubelet"
I0920 18:59:32.641275   83346 config.go:182] Loaded profile config "calico-670888": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.21s)

TestNetworkPlugins/group/calico/NetCatPod (10.22s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-670888 replace --force -f testdata/netcat-deployment.yaml
I0920 18:59:32.853583   83346 kapi.go:136] Waiting for deployment netcat to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-v5l4g" [205f169b-49a3-4e69-9e13-b3c382a33635] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-v5l4g" [205f169b-49a3-4e69-9e13-b3c382a33635] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 10.005365041s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (10.22s)

TestNetworkPlugins/group/calico/DNS (0.17s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-670888 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.17s)

TestNetworkPlugins/group/calico/Localhost (0.13s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-670888 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.13s)

TestNetworkPlugins/group/calico/HairPin (0.14s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-670888 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.14s)

TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-lhlr5" [24eb89b7-7c3a-436d-a63d-f81a2a4cec5b] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.007093433s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

TestNetworkPlugins/group/kindnet/KubeletFlags (0.29s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-670888 "pgrep -a kubelet"
I0920 18:59:50.781980   83346 config.go:182] Loaded profile config "kindnet-670888": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.29s)

TestNetworkPlugins/group/kindnet/NetCatPod (13.32s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-670888 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-8lfgm" [3e2524b2-fddd-4397-ad77-fa9cbebb6c02] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-8lfgm" [3e2524b2-fddd-4397-ad77-fa9cbebb6c02] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 13.005112036s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (13.32s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.25s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-670888 "pgrep -a kubelet"
I0920 19:00:00.971131   83346 config.go:182] Loaded profile config "custom-flannel-670888": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.25s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (10.29s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-670888 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-qmmpn" [8282e603-b4ac-48ab-b27e-787ba0f37f8f] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-qmmpn" [8282e603-b4ac-48ab-b27e-787ba0f37f8f] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 10.005178411s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (10.29s)

TestStartStop/group/old-k8s-version/serial/FirstStart (171.39s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-161166 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --kubernetes-version=v1.20.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-161166 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --kubernetes-version=v1.20.0: (2m51.38541044s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (171.39s)

TestNetworkPlugins/group/kindnet/DNS (0.20s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-670888 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.20s)

TestNetworkPlugins/group/kindnet/Localhost (0.18s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-670888 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.18s)

TestNetworkPlugins/group/kindnet/HairPin (0.19s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-670888 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.19s)

TestNetworkPlugins/group/custom-flannel/DNS (0.20s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-670888 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.20s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.17s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-670888 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.17s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.20s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-670888 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.20s)

TestStartStop/group/no-preload/serial/FirstStart (82.35s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-778291 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --kubernetes-version=v1.31.1
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-778291 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --kubernetes-version=v1.31.1: (1m22.347031813s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (82.35s)

TestNetworkPlugins/group/false/KubeletFlags (0.21s)

=== RUN   TestNetworkPlugins/group/false/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p false-670888 "pgrep -a kubelet"
I0920 19:00:23.338660   83346 config.go:182] Loaded profile config "false-670888": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/false/KubeletFlags (0.21s)

TestNetworkPlugins/group/false/NetCatPod (11.22s)

=== RUN   TestNetworkPlugins/group/false/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context false-670888 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/false/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-frsp7" [7a0fc8fe-dfa6-4467-8d96-0136b175c713] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0920 19:00:26.447488   83346 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-76160/.minikube/profiles/gvisor-806572/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "netcat-6fc964789b-frsp7" [7a0fc8fe-dfa6-4467-8d96-0136b175c713] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/false/NetCatPod: app=netcat healthy within 11.005423702s
--- PASS: TestNetworkPlugins/group/false/NetCatPod (11.22s)

TestStartStop/group/embed-certs/serial/FirstStart (96.67s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-070346 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --kubernetes-version=v1.31.1
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-070346 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --kubernetes-version=v1.31.1: (1m36.666372677s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (96.67s)

TestNetworkPlugins/group/false/DNS (0.20s)

=== RUN   TestNetworkPlugins/group/false/DNS
net_test.go:175: (dbg) Run:  kubectl --context false-670888 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/false/DNS (0.20s)

TestNetworkPlugins/group/false/Localhost (0.17s)

=== RUN   TestNetworkPlugins/group/false/Localhost
net_test.go:194: (dbg) Run:  kubectl --context false-670888 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/false/Localhost (0.17s)

TestNetworkPlugins/group/false/HairPin (0.18s)

=== RUN   TestNetworkPlugins/group/false/HairPin
net_test.go:264: (dbg) Run:  kubectl --context false-670888 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/false/HairPin (0.18s)
E0920 19:07:56.509537   83346 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-76160/.minikube/profiles/enable-default-cni-670888/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:08:07.070509   83346 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-76160/.minikube/profiles/bridge-670888/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:08:07.408038   83346 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-76160/.minikube/profiles/false-670888/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:08:12.720862   83346 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-76160/.minikube/profiles/kubenet-670888/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:08:21.666248   83346 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-76160/.minikube/profiles/skaffold-806341/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:08:34.773793   83346 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-76160/.minikube/profiles/bridge-670888/client.crt: no such file or directory" logger="UnhandledError"

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (137.02s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-871146 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --kubernetes-version=v1.31.1
E0920 19:00:59.296157   83346 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-76160/.minikube/profiles/auto-670888/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:00:59.302571   83346 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-76160/.minikube/profiles/auto-670888/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:00:59.313986   83346 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-76160/.minikube/profiles/auto-670888/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:00:59.335388   83346 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-76160/.minikube/profiles/auto-670888/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:00:59.376792   83346 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-76160/.minikube/profiles/auto-670888/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:00:59.458571   83346 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-76160/.minikube/profiles/auto-670888/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:00:59.620150   83346 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-76160/.minikube/profiles/auto-670888/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:00:59.941686   83346 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-76160/.minikube/profiles/auto-670888/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:01:00.583946   83346 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-76160/.minikube/profiles/auto-670888/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:01:01.866135   83346 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-76160/.minikube/profiles/auto-670888/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:01:04.428163   83346 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-76160/.minikube/profiles/auto-670888/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:01:05.716684   83346 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-76160/.minikube/profiles/functional-698761/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:01:09.549943   83346 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-76160/.minikube/profiles/auto-670888/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:01:19.791674   83346 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-76160/.minikube/profiles/auto-670888/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:01:32.532326   83346 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-76160/.minikube/profiles/flannel-670888/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:01:32.538761   83346 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-76160/.minikube/profiles/flannel-670888/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:01:32.550222   83346 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-76160/.minikube/profiles/flannel-670888/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:01:32.571668   83346 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-76160/.minikube/profiles/flannel-670888/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:01:32.613520   83346 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-76160/.minikube/profiles/flannel-670888/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:01:32.695006   83346 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-76160/.minikube/profiles/flannel-670888/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:01:32.856856   83346 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-76160/.minikube/profiles/flannel-670888/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:01:33.178184   83346 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-76160/.minikube/profiles/flannel-670888/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:01:33.819945   83346 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-76160/.minikube/profiles/flannel-670888/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:01:35.101693   83346 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-76160/.minikube/profiles/flannel-670888/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:01:37.663530   83346 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-76160/.minikube/profiles/flannel-670888/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:01:40.273537   83346 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-76160/.minikube/profiles/auto-670888/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:01:42.785476   83346 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-76160/.minikube/profiles/flannel-670888/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-871146 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --kubernetes-version=v1.31.1: (2m17.022337284s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (137.02s)

TestStartStop/group/no-preload/serial/DeployApp (8.33s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-778291 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [0de42ff3-8436-4b26-b2e9-6740bb1365cf] Pending
helpers_test.go:344: "busybox" [0de42ff3-8436-4b26-b2e9-6740bb1365cf] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [0de42ff3-8436-4b26-b2e9-6740bb1365cf] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 8.005491123s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-778291 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (8.33s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.01s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-778291 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E0920 19:01:53.027692   83346 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-76160/.minikube/profiles/flannel-670888/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:01:53.561913   83346 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-76160/.minikube/profiles/addons-545460/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-778291 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.01s)

TestStartStop/group/no-preload/serial/Stop (13.37s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-778291 --alsologtostderr -v=3
E0920 19:01:58.602512   83346 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-76160/.minikube/profiles/skaffold-806341/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-778291 --alsologtostderr -v=3: (13.367887717s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (13.37s)

TestStartStop/group/embed-certs/serial/DeployApp (8.39s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-070346 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [9faa58ab-5743-48e2-a556-5162fdbb2f55] Pending
helpers_test.go:344: "busybox" [9faa58ab-5743-48e2-a556-5162fdbb2f55] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [9faa58ab-5743-48e2-a556-5162fdbb2f55] Running
E0920 19:02:13.509056   83346 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-76160/.minikube/profiles/flannel-670888/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 8.005574476s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-070346 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (8.39s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.19s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-778291 -n no-preload-778291
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-778291 -n no-preload-778291: exit status 7 (68.539964ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-778291 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.19s)

TestStartStop/group/no-preload/serial/SecondStart (324.49s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-778291 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --kubernetes-version=v1.31.1
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-778291 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --kubernetes-version=v1.31.1: (5m24.19567741s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-778291 -n no-preload-778291
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (324.49s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.12s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-070346 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-070346 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.040073258s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-070346 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.12s)

TestStartStop/group/embed-certs/serial/Stop (13.35s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-070346 --alsologtostderr -v=3
E0920 19:02:21.235907   83346 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-76160/.minikube/profiles/auto-670888/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-070346 --alsologtostderr -v=3: (13.351181957s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (13.35s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.21s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-070346 -n embed-certs-070346
E0920 19:02:28.806680   83346 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-76160/.minikube/profiles/enable-default-cni-670888/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:02:28.813060   83346 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-76160/.minikube/profiles/enable-default-cni-670888/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:02:28.824465   83346 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-76160/.minikube/profiles/enable-default-cni-670888/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:02:28.845919   83346 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-76160/.minikube/profiles/enable-default-cni-670888/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-070346 -n embed-certs-070346: exit status 7 (75.149481ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-070346 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
E0920 19:02:28.887834   83346 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-76160/.minikube/profiles/enable-default-cni-670888/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:02:28.970014   83346 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-76160/.minikube/profiles/enable-default-cni-670888/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.21s)

TestStartStop/group/embed-certs/serial/SecondStart (309.92s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-070346 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --kubernetes-version=v1.31.1
E0920 19:02:29.131568   83346 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-76160/.minikube/profiles/enable-default-cni-670888/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:02:29.453155   83346 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-76160/.minikube/profiles/enable-default-cni-670888/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:02:30.095105   83346 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-76160/.minikube/profiles/enable-default-cni-670888/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:02:31.377098   83346 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-76160/.minikube/profiles/enable-default-cni-670888/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:02:33.938404   83346 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-76160/.minikube/profiles/enable-default-cni-670888/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:02:39.060683   83346 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-76160/.minikube/profiles/enable-default-cni-670888/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:02:42.587329   83346 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-76160/.minikube/profiles/gvisor-806572/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:02:49.302342   83346 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-76160/.minikube/profiles/enable-default-cni-670888/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-070346 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --kubernetes-version=v1.31.1: (5m9.643573306s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-070346 -n embed-certs-070346
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (309.92s)

TestStartStop/group/old-k8s-version/serial/DeployApp (8.56s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-161166 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [9e37fe1a-bb39-4994-a57c-6cc3e172ef25] Pending
E0920 19:02:54.471341   83346 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-76160/.minikube/profiles/flannel-670888/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "busybox" [9e37fe1a-bb39-4994-a57c-6cc3e172ef25] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [9e37fe1a-bb39-4994-a57c-6cc3e172ef25] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 8.004754313s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-161166 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (8.56s)

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.05s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-161166 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-161166 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.05s)

TestStartStop/group/old-k8s-version/serial/Stop (13.34s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-161166 --alsologtostderr -v=3
E0920 19:03:07.070554   83346 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-76160/.minikube/profiles/bridge-670888/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:03:07.076950   83346 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-76160/.minikube/profiles/bridge-670888/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:03:07.088303   83346 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-76160/.minikube/profiles/bridge-670888/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:03:07.109726   83346 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-76160/.minikube/profiles/bridge-670888/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:03:07.151162   83346 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-76160/.minikube/profiles/bridge-670888/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:03:07.233279   83346 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-76160/.minikube/profiles/bridge-670888/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:03:07.394790   83346 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-76160/.minikube/profiles/bridge-670888/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-161166 --alsologtostderr -v=3: (13.341894133s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (13.34s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.32s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-871146 create -f testdata/busybox.yaml
E0920 19:03:07.716922   83346 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-76160/.minikube/profiles/bridge-670888/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [9fa99301-a76d-44f7-bac6-27412b7f866f] Pending
E0920 19:03:08.358366   83346 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-76160/.minikube/profiles/bridge-670888/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "busybox" [9fa99301-a76d-44f7-bac6-27412b7f866f] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E0920 19:03:09.640546   83346 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-76160/.minikube/profiles/bridge-670888/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:03:09.784049   83346 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-76160/.minikube/profiles/enable-default-cni-670888/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "busybox" [9fa99301-a76d-44f7-bac6-27412b7f866f] Running
E0920 19:03:10.290445   83346 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-76160/.minikube/profiles/gvisor-806572/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:03:12.202284   83346 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-76160/.minikube/profiles/bridge-670888/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:03:12.720981   83346 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-76160/.minikube/profiles/kubenet-670888/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:03:12.727370   83346 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-76160/.minikube/profiles/kubenet-670888/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:03:12.738700   83346 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-76160/.minikube/profiles/kubenet-670888/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:03:12.760033   83346 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-76160/.minikube/profiles/kubenet-670888/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:03:12.802069   83346 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-76160/.minikube/profiles/kubenet-670888/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:03:12.883481   83346 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-76160/.minikube/profiles/kubenet-670888/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:03:13.044997   83346 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-76160/.minikube/profiles/kubenet-670888/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:03:13.367265   83346 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-76160/.minikube/profiles/kubenet-670888/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:03:14.009276   83346 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-76160/.minikube/profiles/kubenet-670888/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:03:15.291393   83346 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-76160/.minikube/profiles/kubenet-670888/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 8.004560588s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-871146 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.32s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.98s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-871146 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-871146 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.98s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.2s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-161166 -n old-k8s-version-161166
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-161166 -n old-k8s-version-161166: exit status 7 (78.709153ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-161166 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.20s)

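The EnableAddonAfterStop tests above hinge on `minikube status` exiting with code 7 for a stopped host, which the harness logs as "status error: exit status 7 (may be ok)" before enabling the addon. A minimal sketch of that check (the `status_exit` value here is a hard-coded stand-in for actually running `out/minikube-linux-amd64 status`, which is not invoked):

```shell
# Stand-in for: out/minikube-linux-amd64 status --format={{.Host}} -p <profile>; status_exit=$?
# minikube reserves exit code 7 to signal a stopped (but otherwise intact) host.
status_exit=7

if [ "$status_exit" -eq 0 ]; then
  echo "running"
elif [ "$status_exit" -eq 7 ]; then
  # Mirrors the test's "status error: exit status 7 (may be ok)" handling:
  # a stopped host is acceptable before `addons enable` is attempted.
  echo "stopped (may be ok)"
else
  echo "unexpected status: $status_exit" >&2
  exit 1
fi
```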
TestStartStop/group/old-k8s-version/serial/SecondStart (398.98s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-161166 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --kubernetes-version=v1.20.0
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-161166 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --kubernetes-version=v1.20.0: (6m38.730464104s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-161166 -n old-k8s-version-161166
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (398.98s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (13.35s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-871146 --alsologtostderr -v=3
E0920 19:03:17.324229   83346 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-76160/.minikube/profiles/bridge-670888/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:03:17.853560   83346 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-76160/.minikube/profiles/kubenet-670888/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:03:22.974991   83346 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-76160/.minikube/profiles/kubenet-670888/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:03:27.566091   83346 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-76160/.minikube/profiles/bridge-670888/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-871146 --alsologtostderr -v=3: (13.347940372s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (13.35s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.18s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-871146 -n default-k8s-diff-port-871146
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-871146 -n default-k8s-diff-port-871146: exit status 7 (63.004433ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-871146 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.18s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (306.82s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-871146 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --kubernetes-version=v1.31.1
E0920 19:03:33.217206   83346 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-76160/.minikube/profiles/kubenet-670888/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:03:43.157563   83346 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-76160/.minikube/profiles/auto-670888/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:03:48.048135   83346 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-76160/.minikube/profiles/bridge-670888/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:03:50.745570   83346 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-76160/.minikube/profiles/enable-default-cni-670888/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:03:53.698506   83346 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-76160/.minikube/profiles/kubenet-670888/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:04:16.393428   83346 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-76160/.minikube/profiles/flannel-670888/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:04:26.428058   83346 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-76160/.minikube/profiles/calico-670888/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:04:26.434487   83346 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-76160/.minikube/profiles/calico-670888/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:04:26.445975   83346 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-76160/.minikube/profiles/calico-670888/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:04:26.467397   83346 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-76160/.minikube/profiles/calico-670888/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:04:26.508876   83346 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-76160/.minikube/profiles/calico-670888/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:04:26.590416   83346 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-76160/.minikube/profiles/calico-670888/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:04:26.752001   83346 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-76160/.minikube/profiles/calico-670888/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:04:27.073776   83346 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-76160/.minikube/profiles/calico-670888/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:04:27.715832   83346 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-76160/.minikube/profiles/calico-670888/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:04:28.997954   83346 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-76160/.minikube/profiles/calico-670888/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:04:29.010501   83346 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-76160/.minikube/profiles/bridge-670888/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:04:31.560305   83346 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-76160/.minikube/profiles/calico-670888/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:04:34.660438   83346 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-76160/.minikube/profiles/kubenet-670888/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:04:36.682012   83346 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-76160/.minikube/profiles/calico-670888/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:04:44.487801   83346 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-76160/.minikube/profiles/kindnet-670888/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:04:44.494262   83346 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-76160/.minikube/profiles/kindnet-670888/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:04:44.505656   83346 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-76160/.minikube/profiles/kindnet-670888/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:04:44.527033   83346 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-76160/.minikube/profiles/kindnet-670888/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:04:44.568425   83346 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-76160/.minikube/profiles/kindnet-670888/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:04:44.649705   83346 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-76160/.minikube/profiles/kindnet-670888/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:04:44.811559   83346 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-76160/.minikube/profiles/kindnet-670888/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:04:45.133268   83346 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-76160/.minikube/profiles/kindnet-670888/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:04:45.775456   83346 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-76160/.minikube/profiles/kindnet-670888/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:04:46.923317   83346 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-76160/.minikube/profiles/calico-670888/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:04:47.056842   83346 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-76160/.minikube/profiles/kindnet-670888/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:04:49.618644   83346 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-76160/.minikube/profiles/kindnet-670888/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:04:54.740823   83346 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-76160/.minikube/profiles/kindnet-670888/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:05:01.220941   83346 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-76160/.minikube/profiles/custom-flannel-670888/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:05:01.227382   83346 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-76160/.minikube/profiles/custom-flannel-670888/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:05:01.238740   83346 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-76160/.minikube/profiles/custom-flannel-670888/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:05:01.260095   83346 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-76160/.minikube/profiles/custom-flannel-670888/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:05:01.301486   83346 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-76160/.minikube/profiles/custom-flannel-670888/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:05:01.382950   83346 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-76160/.minikube/profiles/custom-flannel-670888/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:05:01.544546   83346 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-76160/.minikube/profiles/custom-flannel-670888/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:05:01.866387   83346 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-76160/.minikube/profiles/custom-flannel-670888/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:05:02.508437   83346 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-76160/.minikube/profiles/custom-flannel-670888/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:05:03.790362   83346 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-76160/.minikube/profiles/custom-flannel-670888/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:05:04.982310   83346 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-76160/.minikube/profiles/kindnet-670888/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:05:06.351959   83346 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-76160/.minikube/profiles/custom-flannel-670888/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:05:07.405232   83346 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-76160/.minikube/profiles/calico-670888/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:05:11.474159   83346 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-76160/.minikube/profiles/custom-flannel-670888/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:05:12.667152   83346 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-76160/.minikube/profiles/enable-default-cni-670888/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:05:21.715640   83346 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-76160/.minikube/profiles/custom-flannel-670888/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:05:23.547862   83346 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-76160/.minikube/profiles/false-670888/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:05:23.554300   83346 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-76160/.minikube/profiles/false-670888/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:05:23.565867   83346 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-76160/.minikube/profiles/false-670888/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:05:23.588078   83346 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-76160/.minikube/profiles/false-670888/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:05:23.629501   83346 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-76160/.minikube/profiles/false-670888/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:05:23.711094   83346 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-76160/.minikube/profiles/false-670888/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:05:23.872612   83346 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-76160/.minikube/profiles/false-670888/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:05:24.194740   83346 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-76160/.minikube/profiles/false-670888/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:05:24.836857   83346 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-76160/.minikube/profiles/false-670888/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:05:25.464715   83346 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-76160/.minikube/profiles/kindnet-670888/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:05:26.118454   83346 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-76160/.minikube/profiles/false-670888/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:05:28.679861   83346 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-76160/.minikube/profiles/false-670888/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:05:33.801202   83346 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-76160/.minikube/profiles/false-670888/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:05:42.197542   83346 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-76160/.minikube/profiles/custom-flannel-670888/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:05:44.042893   83346 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-76160/.minikube/profiles/false-670888/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:05:48.367361   83346 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-76160/.minikube/profiles/calico-670888/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:05:50.931980   83346 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-76160/.minikube/profiles/bridge-670888/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:05:56.581891   83346 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-76160/.minikube/profiles/kubenet-670888/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:05:59.296647   83346 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-76160/.minikube/profiles/auto-670888/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:06:04.525073   83346 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-76160/.minikube/profiles/false-670888/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:06:05.716609   83346 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-76160/.minikube/profiles/functional-698761/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:06:06.426569   83346 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-76160/.minikube/profiles/kindnet-670888/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:06:23.158866   83346 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-76160/.minikube/profiles/custom-flannel-670888/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:06:26.999512   83346 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-76160/.minikube/profiles/auto-670888/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:06:32.532961   83346 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-76160/.minikube/profiles/flannel-670888/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:06:45.486626   83346 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-76160/.minikube/profiles/false-670888/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:06:53.561641   83346 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-76160/.minikube/profiles/addons-545460/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:06:58.602066   83346 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-76160/.minikube/profiles/skaffold-806341/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:07:00.235750   83346 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-76160/.minikube/profiles/flannel-670888/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:07:10.289529   83346 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-76160/.minikube/profiles/calico-670888/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:07:28.348075   83346 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-76160/.minikube/profiles/kindnet-670888/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:07:28.806923   83346 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-76160/.minikube/profiles/enable-default-cni-670888/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-871146 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --kubernetes-version=v1.31.1: (5m6.570470342s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-871146 -n default-k8s-diff-port-871146
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (306.82s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-fdnwv" [1a6f8d94-4968-44c2-b054-eec9dbb38319] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.005564767s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.08s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-fdnwv" [1a6f8d94-4968-44c2-b054-eec9dbb38319] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.005034054s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-778291 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.08s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (7.01s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-qqvdt" [1e6c5b19-5b3b-4012-be2a-e727a1cacca5] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:344: "kubernetes-dashboard-695b96c756-qqvdt" [1e6c5b19-5b3b-4012-be2a-e727a1cacca5] Running
E0920 19:07:42.587965   83346 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-76160/.minikube/profiles/gvisor-806572/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 7.005343634s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (7.01s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.2s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-778291 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/gvisor-addon:2
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.20s)

TestStartStop/group/no-preload/serial/Pause (2.46s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-778291 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-778291 -n no-preload-778291
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-778291 -n no-preload-778291: exit status 2 (235.696085ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-778291 -n no-preload-778291
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-778291 -n no-preload-778291: exit status 2 (240.63759ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p no-preload-778291 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-778291 -n no-preload-778291
E0920 19:07:45.080205   83346 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-76160/.minikube/profiles/custom-flannel-670888/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-778291 -n no-preload-778291
--- PASS: TestStartStop/group/no-preload/serial/Pause (2.46s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.08s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-qqvdt" [1e6c5b19-5b3b-4012-be2a-e727a1cacca5] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004154566s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-070346 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.08s)

TestStartStop/group/newest-cni/serial/FirstStart (61.38s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-944329 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --kubernetes-version=v1.31.1
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-944329 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --kubernetes-version=v1.31.1: (1m1.381064261s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (61.38s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.22s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-070346 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/gvisor-addon:2
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.22s)

TestStartStop/group/embed-certs/serial/Pause (2.59s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-070346 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-070346 -n embed-certs-070346
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-070346 -n embed-certs-070346: exit status 2 (255.567973ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-070346 -n embed-certs-070346
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-070346 -n embed-certs-070346: exit status 2 (249.063937ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p embed-certs-070346 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-070346 -n embed-certs-070346
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-070346 -n embed-certs-070346
--- PASS: TestStartStop/group/embed-certs/serial/Pause (2.59s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-9qssd" [c5615fcd-5d57-4cec-99a5-45206e6e346f] Running
E0920 19:08:40.423280   83346 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-76160/.minikube/profiles/kubenet-670888/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.005889715s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.09s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-9qssd" [c5615fcd-5d57-4cec-99a5-45206e6e346f] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003766418s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-871146 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.09s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.95s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-944329 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.95s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.23s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-871146 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/gvisor-addon:2
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.23s)

TestStartStop/group/default-k8s-diff-port/serial/Pause (2.5s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-871146 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-871146 -n default-k8s-diff-port-871146
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-871146 -n default-k8s-diff-port-871146: exit status 2 (246.903864ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-871146 -n default-k8s-diff-port-871146
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-871146 -n default-k8s-diff-port-871146: exit status 2 (253.325561ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-diff-port-871146 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-871146 -n default-k8s-diff-port-871146
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-871146 -n default-k8s-diff-port-871146
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (2.50s)

TestStartStop/group/newest-cni/serial/Stop (7.87s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-944329 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-944329 --alsologtostderr -v=3: (7.866020543s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (7.87s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.18s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-944329 -n newest-cni-944329
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-944329 -n newest-cni-944329: exit status 7 (67.156544ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-944329 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.18s)

TestStartStop/group/newest-cni/serial/SecondStart (38.43s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-944329 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --kubernetes-version=v1.31.1
E0920 19:09:26.428945   83346 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-76160/.minikube/profiles/calico-670888/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-944329 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --kubernetes-version=v1.31.1: (38.128495751s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-944329 -n newest-cni-944329
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (38.43s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.26s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-944329 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/gvisor-addon:2
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.26s)

TestStartStop/group/newest-cni/serial/Pause (2.48s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-944329 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-944329 -n newest-cni-944329
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-944329 -n newest-cni-944329: exit status 2 (253.886243ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-944329 -n newest-cni-944329
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-944329 -n newest-cni-944329: exit status 2 (267.284004ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-944329 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-944329 -n newest-cni-944329
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-944329 -n newest-cni-944329
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.48s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-srdvf" [916ba8b3-3b02-4bae-a126-6f61d1676c1c] Running
E0920 19:10:01.220400   83346 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-76160/.minikube/profiles/custom-flannel-670888/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004552461s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.07s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-srdvf" [916ba8b3-3b02-4bae-a126-6f61d1676c1c] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004422454s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-161166 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.07s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.2s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-161166 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/gvisor-addon:2
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.20s)

TestStartStop/group/old-k8s-version/serial/Pause (2.29s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-161166 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-161166 -n old-k8s-version-161166
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-161166 -n old-k8s-version-161166: exit status 2 (229.925414ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-161166 -n old-k8s-version-161166
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-161166 -n old-k8s-version-161166: exit status 2 (228.72651ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p old-k8s-version-161166 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-161166 -n old-k8s-version-161166
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-161166 -n old-k8s-version-161166
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (2.29s)

Test skip (31/340)

TestDownloadOnly/v1.20.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

TestDownloadOnly/v1.20.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

TestDownloadOnly/v1.20.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.31.1/cached-images (0s)

=== RUN   TestDownloadOnly/v1.31.1/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.31.1/cached-images (0.00s)

TestDownloadOnly/v1.31.1/binaries (0s)

=== RUN   TestDownloadOnly/v1.31.1/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.31.1/binaries (0.00s)

TestDownloadOnly/v1.31.1/kubectl (0s)

=== RUN   TestDownloadOnly/v1.31.1/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.31.1/kubectl (0.00s)

TestDownloadOnlyKic (0s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:438: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:550: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.02s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.02s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

TestImageBuild/serial/validateImageBuildWithBuildEnv (0s)

=== RUN   TestImageBuild/serial/validateImageBuildWithBuildEnv
image_test.go:114: skipping due to https://github.com/kubernetes/minikube/issues/12431
--- SKIP: TestImageBuild/serial/validateImageBuildWithBuildEnv (0.00s)

TestKicCustomNetwork (0s)

=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

TestKicExistingNetwork (0s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

TestKicCustomSubnet (0s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

TestKicStaticIP (0s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestInsufficientStorage (0s)

=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

TestMissingContainerUpgrade (0s)

=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

TestNetworkPlugins/group/cilium (3.55s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:629: 
----------------------- debugLogs start: cilium-670888 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-670888

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-670888

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-670888

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-670888

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-670888

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-670888

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-670888

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-670888

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-670888

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-670888

>>> host: /etc/nsswitch.conf:
* Profile "cilium-670888" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-670888"

>>> host: /etc/hosts:
* Profile "cilium-670888" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-670888"

>>> host: /etc/resolv.conf:
* Profile "cilium-670888" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-670888"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-670888

>>> host: crictl pods:
* Profile "cilium-670888" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-670888"

>>> host: crictl containers:
* Profile "cilium-670888" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-670888"

>>> k8s: describe netcat deployment:
error: context "cilium-670888" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-670888" does not exist

>>> k8s: netcat logs:
error: context "cilium-670888" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-670888" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-670888" does not exist

>>> k8s: coredns logs:
error: context "cilium-670888" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-670888" does not exist

>>> k8s: api server logs:
error: context "cilium-670888" does not exist

>>> host: /etc/cni:
* Profile "cilium-670888" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-670888"

>>> host: ip a s:
* Profile "cilium-670888" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-670888"

>>> host: ip r s:
* Profile "cilium-670888" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-670888"

>>> host: iptables-save:
* Profile "cilium-670888" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-670888"

>>> host: iptables table nat:
* Profile "cilium-670888" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-670888"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-670888

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-670888

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-670888" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-670888" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-670888

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-670888

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-670888" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-670888" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-670888" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-670888" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-670888" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-670888" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-670888"

>>> host: kubelet daemon config:
* Profile "cilium-670888" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-670888"

>>> k8s: kubelet logs:
* Profile "cilium-670888" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-670888"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-670888" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-670888"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-670888" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-670888"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-670888

>>> host: docker daemon status:
* Profile "cilium-670888" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-670888"

>>> host: docker daemon config:
* Profile "cilium-670888" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-670888"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-670888" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-670888"

>>> host: docker system info:
* Profile "cilium-670888" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-670888"

>>> host: cri-docker daemon status:
* Profile "cilium-670888" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-670888"

>>> host: cri-docker daemon config:
* Profile "cilium-670888" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-670888"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-670888" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-670888"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-670888" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-670888"

>>> host: cri-dockerd version:
* Profile "cilium-670888" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-670888"

>>> host: containerd daemon status:
* Profile "cilium-670888" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-670888"

>>> host: containerd daemon config:
* Profile "cilium-670888" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-670888"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-670888" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-670888"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-670888" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-670888"

>>> host: containerd config dump:
* Profile "cilium-670888" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-670888"

>>> host: crio daemon status:
* Profile "cilium-670888" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-670888"

>>> host: crio daemon config:
* Profile "cilium-670888" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-670888"

>>> host: /etc/crio:
* Profile "cilium-670888" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-670888"

>>> host: crio config:
* Profile "cilium-670888" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-670888"

----------------------- debugLogs end: cilium-670888 [took: 3.388447954s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-670888" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-670888
--- SKIP: TestNetworkPlugins/group/cilium (3.55s)

TestStartStop/group/disable-driver-mounts (0.14s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-650699" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-650699
--- SKIP: TestStartStop/group/disable-driver-mounts (0.14s)