Test Report: KVM_Linux 19711

f2dddbc2cec1d99a0bb3d71de73f46a47f499a62:2024-09-27:36389
Test fail (1/340)

|-------|------------------------------|--------------|
| Order | Failed test                  | Duration (s) |
|-------|------------------------------|--------------|
|    33 | TestAddons/parallel/Registry |        73.78 |
|-------|------------------------------|--------------|
TestAddons/parallel/Registry (73.78s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:328: registry stabilized in 4.294802ms
addons_test.go:330: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
I0927 00:27:20.380760   22114 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I0927 00:27:20.380793   22114 kapi.go:107] duration metric: took 7.407313ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
helpers_test.go:344: "registry-66c9cd494c-fwsrk" [6f46ca63-ee6e-40a2-847d-0027eb2fd753] Running
addons_test.go:330: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.00383329s
addons_test.go:333: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-4k4lw" [a0e1613c-f205-4152-b2b1-2310f2f418b0] Running
addons_test.go:333: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.004311464s
addons_test.go:338: (dbg) Run:  kubectl --context addons-921129 delete po -l run=registry-test --now
addons_test.go:343: (dbg) Run:  kubectl --context addons-921129 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:343: (dbg) Non-zero exit: kubectl --context addons-921129 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": exit status 1 (1m0.092314439s)

-- stdout --
	pod "registry-test" deleted

-- /stdout --
** stderr ** 
	error: timed out waiting for the condition

** /stderr **
addons_test.go:345: failed to hit registry.kube-system.svc.cluster.local. args "kubectl --context addons-921129 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c \"wget --spider -S http://registry.kube-system.svc.cluster.local\"" failed: exit status 1
addons_test.go:349: expected curl response be "HTTP/1.1 200", but got *pod "registry-test" deleted
*
addons_test.go:357: (dbg) Run:  out/minikube-linux-amd64 -p addons-921129 ip
2024/09/27 00:28:31 [DEBUG] GET http://192.168.39.24:5000
addons_test.go:386: (dbg) Run:  out/minikube-linux-amd64 -p addons-921129 addons disable registry --alsologtostderr -v=1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-921129 -n addons-921129
helpers_test.go:244: <<< TestAddons/parallel/Registry FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Registry]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-921129 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-921129 logs -n 25: (1.076597658s)
helpers_test.go:252: TestAddons/parallel/Registry logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| delete  | -p download-only-927319                                                                     | download-only-927319 | jenkins | v1.34.0 | 27 Sep 24 00:14 UTC | 27 Sep 24 00:14 UTC |
	| delete  | -p download-only-517789                                                                     | download-only-517789 | jenkins | v1.34.0 | 27 Sep 24 00:14 UTC | 27 Sep 24 00:14 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-270882 | jenkins | v1.34.0 | 27 Sep 24 00:14 UTC |                     |
	|         | binary-mirror-270882                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                      |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                      |         |         |                     |                     |
	|         | http://127.0.0.1:42777                                                                      |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-270882                                                                     | binary-mirror-270882 | jenkins | v1.34.0 | 27 Sep 24 00:14 UTC | 27 Sep 24 00:14 UTC |
	| addons  | disable dashboard -p                                                                        | addons-921129        | jenkins | v1.34.0 | 27 Sep 24 00:14 UTC |                     |
	|         | addons-921129                                                                               |                      |         |         |                     |                     |
	| addons  | enable dashboard -p                                                                         | addons-921129        | jenkins | v1.34.0 | 27 Sep 24 00:14 UTC |                     |
	|         | addons-921129                                                                               |                      |         |         |                     |                     |
	| start   | -p addons-921129 --wait=true                                                                | addons-921129        | jenkins | v1.34.0 | 27 Sep 24 00:14 UTC | 27 Sep 24 00:18 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                      |         |         |                     |                     |
	|         | --addons=registry                                                                           |                      |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                      |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                      |         |         |                     |                     |
	|         | --driver=kvm2  --addons=ingress                                                             |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                      |         |         |                     |                     |
	| addons  | addons-921129 addons disable                                                                | addons-921129        | jenkins | v1.34.0 | 27 Sep 24 00:19 UTC | 27 Sep 24 00:19 UTC |
	|         | volcano --alsologtostderr -v=1                                                              |                      |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-921129        | jenkins | v1.34.0 | 27 Sep 24 00:27 UTC | 27 Sep 24 00:27 UTC |
	|         | -p addons-921129                                                                            |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-921129 addons                                                                        | addons-921129        | jenkins | v1.34.0 | 27 Sep 24 00:27 UTC | 27 Sep 24 00:27 UTC |
	|         | disable metrics-server                                                                      |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-921129        | jenkins | v1.34.0 | 27 Sep 24 00:27 UTC | 27 Sep 24 00:27 UTC |
	|         | addons-921129                                                                               |                      |         |         |                     |                     |
	| addons  | addons-921129 addons disable                                                                | addons-921129        | jenkins | v1.34.0 | 27 Sep 24 00:27 UTC | 27 Sep 24 00:27 UTC |
	|         | headlamp --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-921129        | jenkins | v1.34.0 | 27 Sep 24 00:27 UTC | 27 Sep 24 00:27 UTC |
	|         | -p addons-921129                                                                            |                      |         |         |                     |                     |
	| ssh     | addons-921129 ssh curl -s                                                                   | addons-921129        | jenkins | v1.34.0 | 27 Sep 24 00:27 UTC | 27 Sep 24 00:27 UTC |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                      |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                      |         |         |                     |                     |
	| ip      | addons-921129 ip                                                                            | addons-921129        | jenkins | v1.34.0 | 27 Sep 24 00:27 UTC | 27 Sep 24 00:27 UTC |
	| addons  | addons-921129 addons disable                                                                | addons-921129        | jenkins | v1.34.0 | 27 Sep 24 00:27 UTC | 27 Sep 24 00:27 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-921129 addons disable                                                                | addons-921129        | jenkins | v1.34.0 | 27 Sep 24 00:27 UTC | 27 Sep 24 00:27 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                      |         |         |                     |                     |
	| addons  | addons-921129 addons disable                                                                | addons-921129        | jenkins | v1.34.0 | 27 Sep 24 00:27 UTC | 27 Sep 24 00:27 UTC |
	|         | yakd --alsologtostderr -v=1                                                                 |                      |         |         |                     |                     |
	| addons  | addons-921129 addons                                                                        | addons-921129        | jenkins | v1.34.0 | 27 Sep 24 00:28 UTC | 27 Sep 24 00:28 UTC |
	|         | disable csi-hostpath-driver                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-921129        | jenkins | v1.34.0 | 27 Sep 24 00:28 UTC | 27 Sep 24 00:28 UTC |
	|         | addons-921129                                                                               |                      |         |         |                     |                     |
	| ssh     | addons-921129 ssh cat                                                                       | addons-921129        | jenkins | v1.34.0 | 27 Sep 24 00:28 UTC | 27 Sep 24 00:28 UTC |
	|         | /opt/local-path-provisioner/pvc-8efc34d2-173d-43c8-a797-ed9149a8a1e5_default_test-pvc/file1 |                      |         |         |                     |                     |
	| addons  | addons-921129 addons disable                                                                | addons-921129        | jenkins | v1.34.0 | 27 Sep 24 00:28 UTC |                     |
	|         | storage-provisioner-rancher                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-921129 addons                                                                        | addons-921129        | jenkins | v1.34.0 | 27 Sep 24 00:28 UTC | 27 Sep 24 00:28 UTC |
	|         | disable volumesnapshots                                                                     |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ip      | addons-921129 ip                                                                            | addons-921129        | jenkins | v1.34.0 | 27 Sep 24 00:28 UTC | 27 Sep 24 00:28 UTC |
	| addons  | addons-921129 addons disable                                                                | addons-921129        | jenkins | v1.34.0 | 27 Sep 24 00:28 UTC | 27 Sep 24 00:28 UTC |
	|         | registry --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/27 00:14:51
	Running on machine: ubuntu-20-agent-7
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0927 00:14:51.755018   22734 out.go:345] Setting OutFile to fd 1 ...
	I0927 00:14:51.755122   22734 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 00:14:51.755130   22734 out.go:358] Setting ErrFile to fd 2...
	I0927 00:14:51.755134   22734 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 00:14:51.755328   22734 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19711-14912/.minikube/bin
	I0927 00:14:51.755917   22734 out.go:352] Setting JSON to false
	I0927 00:14:51.756721   22734 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":3443,"bootTime":1727392649,"procs":171,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0927 00:14:51.756817   22734 start.go:139] virtualization: kvm guest
	I0927 00:14:51.759143   22734 out.go:177] * [addons-921129] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0927 00:14:51.760572   22734 notify.go:220] Checking for updates...
	I0927 00:14:51.760592   22734 out.go:177]   - MINIKUBE_LOCATION=19711
	I0927 00:14:51.761989   22734 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0927 00:14:51.763341   22734 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19711-14912/kubeconfig
	I0927 00:14:51.764689   22734 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19711-14912/.minikube
	I0927 00:14:51.766049   22734 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0927 00:14:51.767465   22734 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0927 00:14:51.769016   22734 driver.go:394] Setting default libvirt URI to qemu:///system
	I0927 00:14:51.802879   22734 out.go:177] * Using the kvm2 driver based on user configuration
	I0927 00:14:51.804357   22734 start.go:297] selected driver: kvm2
	I0927 00:14:51.804375   22734 start.go:901] validating driver "kvm2" against <nil>
	I0927 00:14:51.804387   22734 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0927 00:14:51.805076   22734 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0927 00:14:51.805184   22734 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19711-14912/.minikube/bin:/home/jenkins/workspace/KVM_Linux_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0927 00:14:51.820630   22734 install.go:137] /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0927 00:14:51.820708   22734 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0927 00:14:51.820955   22734 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0927 00:14:51.820983   22734 cni.go:84] Creating CNI manager for ""
	I0927 00:14:51.821029   22734 cni.go:158] "kvm2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0927 00:14:51.821046   22734 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0927 00:14:51.821112   22734 start.go:340] cluster config:
	{Name:addons-921129 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-921129 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:d
ocker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: S
SHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0927 00:14:51.821215   22734 iso.go:125] acquiring lock: {Name:mkb5ac60d416b321ea42aa90cf43a9e41df90177 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0927 00:14:51.823079   22734 out.go:177] * Starting "addons-921129" primary control-plane node in "addons-921129" cluster
	I0927 00:14:51.824295   22734 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0927 00:14:51.824333   22734 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19711-14912/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4
	I0927 00:14:51.824342   22734 cache.go:56] Caching tarball of preloaded images
	I0927 00:14:51.824435   22734 preload.go:172] Found /home/jenkins/minikube-integration/19711-14912/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0927 00:14:51.824447   22734 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0927 00:14:51.824757   22734 profile.go:143] Saving config to /home/jenkins/minikube-integration/19711-14912/.minikube/profiles/addons-921129/config.json ...
	I0927 00:14:51.824777   22734 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19711-14912/.minikube/profiles/addons-921129/config.json: {Name:mk487cda02cb945a2fc40a9db1021b731582eb0e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 00:14:51.824918   22734 start.go:360] acquireMachinesLock for addons-921129: {Name:mk790c5a91ac6c252ddc53aabf49b721b83b6e42 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0927 00:14:51.824968   22734 start.go:364] duration metric: took 35.808µs to acquireMachinesLock for "addons-921129"
	I0927 00:14:51.824983   22734 start.go:93] Provisioning new machine with config: &{Name:addons-921129 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.31.1 ClusterName:addons-921129 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort
:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0927 00:14:51.825046   22734 start.go:125] createHost starting for "" (driver="kvm2")
	I0927 00:14:51.826802   22734 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0927 00:14:51.827052   22734 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0927 00:14:51.827102   22734 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 00:14:51.842388   22734 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34319
	I0927 00:14:51.843017   22734 main.go:141] libmachine: () Calling .GetVersion
	I0927 00:14:51.843718   22734 main.go:141] libmachine: Using API Version  1
	I0927 00:14:51.843739   22734 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 00:14:51.844188   22734 main.go:141] libmachine: () Calling .GetMachineName
	I0927 00:14:51.844429   22734 main.go:141] libmachine: (addons-921129) Calling .GetMachineName
	I0927 00:14:51.844606   22734 main.go:141] libmachine: (addons-921129) Calling .DriverName
	I0927 00:14:51.844942   22734 start.go:159] libmachine.API.Create for "addons-921129" (driver="kvm2")
	I0927 00:14:51.844975   22734 client.go:168] LocalClient.Create starting
	I0927 00:14:51.845014   22734 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19711-14912/.minikube/certs/ca.pem
	I0927 00:14:52.145661   22734 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19711-14912/.minikube/certs/cert.pem
	I0927 00:14:52.247346   22734 main.go:141] libmachine: Running pre-create checks...
	I0927 00:14:52.247373   22734 main.go:141] libmachine: (addons-921129) Calling .PreCreateCheck
	I0927 00:14:52.247948   22734 main.go:141] libmachine: (addons-921129) Calling .GetConfigRaw
	I0927 00:14:52.248591   22734 main.go:141] libmachine: Creating machine...
	I0927 00:14:52.248609   22734 main.go:141] libmachine: (addons-921129) Calling .Create
	I0927 00:14:52.248886   22734 main.go:141] libmachine: (addons-921129) Creating KVM machine...
	I0927 00:14:52.250115   22734 main.go:141] libmachine: (addons-921129) DBG | found existing default KVM network
	I0927 00:14:52.250881   22734 main.go:141] libmachine: (addons-921129) DBG | I0927 00:14:52.250686   22755 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000015310}
	I0927 00:14:52.250953   22734 main.go:141] libmachine: (addons-921129) DBG | created network xml: 
	I0927 00:14:52.250973   22734 main.go:141] libmachine: (addons-921129) DBG | <network>
	I0927 00:14:52.250987   22734 main.go:141] libmachine: (addons-921129) DBG |   <name>mk-addons-921129</name>
	I0927 00:14:52.251001   22734 main.go:141] libmachine: (addons-921129) DBG |   <dns enable='no'/>
	I0927 00:14:52.251029   22734 main.go:141] libmachine: (addons-921129) DBG |   
	I0927 00:14:52.251055   22734 main.go:141] libmachine: (addons-921129) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0927 00:14:52.251102   22734 main.go:141] libmachine: (addons-921129) DBG |     <dhcp>
	I0927 00:14:52.251132   22734 main.go:141] libmachine: (addons-921129) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0927 00:14:52.251142   22734 main.go:141] libmachine: (addons-921129) DBG |     </dhcp>
	I0927 00:14:52.251149   22734 main.go:141] libmachine: (addons-921129) DBG |   </ip>
	I0927 00:14:52.251158   22734 main.go:141] libmachine: (addons-921129) DBG |   
	I0927 00:14:52.251165   22734 main.go:141] libmachine: (addons-921129) DBG | </network>
	I0927 00:14:52.251175   22734 main.go:141] libmachine: (addons-921129) DBG | 
	I0927 00:14:52.256884   22734 main.go:141] libmachine: (addons-921129) DBG | trying to create private KVM network mk-addons-921129 192.168.39.0/24...
	I0927 00:14:52.327689   22734 main.go:141] libmachine: (addons-921129) DBG | private KVM network mk-addons-921129 192.168.39.0/24 created
	I0927 00:14:52.327733   22734 main.go:141] libmachine: (addons-921129) DBG | I0927 00:14:52.327691   22755 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19711-14912/.minikube
	I0927 00:14:52.327806   22734 main.go:141] libmachine: (addons-921129) Setting up store path in /home/jenkins/minikube-integration/19711-14912/.minikube/machines/addons-921129 ...
	I0927 00:14:52.327833   22734 main.go:141] libmachine: (addons-921129) Building disk image from file:///home/jenkins/minikube-integration/19711-14912/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso
	I0927 00:14:52.327856   22734 main.go:141] libmachine: (addons-921129) Downloading /home/jenkins/minikube-integration/19711-14912/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19711-14912/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso...
	I0927 00:14:52.591995   22734 main.go:141] libmachine: (addons-921129) DBG | I0927 00:14:52.591863   22755 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19711-14912/.minikube/machines/addons-921129/id_rsa...
	I0927 00:14:52.707559   22734 main.go:141] libmachine: (addons-921129) DBG | I0927 00:14:52.707340   22755 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19711-14912/.minikube/machines/addons-921129/addons-921129.rawdisk...
	I0927 00:14:52.707594   22734 main.go:141] libmachine: (addons-921129) DBG | Writing magic tar header
	I0927 00:14:52.707606   22734 main.go:141] libmachine: (addons-921129) Setting executable bit set on /home/jenkins/minikube-integration/19711-14912/.minikube/machines/addons-921129 (perms=drwx------)
	I0927 00:14:52.707626   22734 main.go:141] libmachine: (addons-921129) Setting executable bit set on /home/jenkins/minikube-integration/19711-14912/.minikube/machines (perms=drwxr-xr-x)
	I0927 00:14:52.707636   22734 main.go:141] libmachine: (addons-921129) Setting executable bit set on /home/jenkins/minikube-integration/19711-14912/.minikube (perms=drwxr-xr-x)
	I0927 00:14:52.707644   22734 main.go:141] libmachine: (addons-921129) DBG | Writing SSH key tar header
	I0927 00:14:52.707653   22734 main.go:141] libmachine: (addons-921129) Setting executable bit set on /home/jenkins/minikube-integration/19711-14912 (perms=drwxrwxr-x)
	I0927 00:14:52.707666   22734 main.go:141] libmachine: (addons-921129) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0927 00:14:52.707673   22734 main.go:141] libmachine: (addons-921129) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0927 00:14:52.707681   22734 main.go:141] libmachine: (addons-921129) Creating domain...
	I0927 00:14:52.707694   22734 main.go:141] libmachine: (addons-921129) DBG | I0927 00:14:52.707456   22755 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19711-14912/.minikube/machines/addons-921129 ...
	I0927 00:14:52.707705   22734 main.go:141] libmachine: (addons-921129) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19711-14912/.minikube/machines/addons-921129
	I0927 00:14:52.707717   22734 main.go:141] libmachine: (addons-921129) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19711-14912/.minikube/machines
	I0927 00:14:52.707734   22734 main.go:141] libmachine: (addons-921129) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19711-14912/.minikube
	I0927 00:14:52.707745   22734 main.go:141] libmachine: (addons-921129) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19711-14912
	I0927 00:14:52.707754   22734 main.go:141] libmachine: (addons-921129) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0927 00:14:52.707761   22734 main.go:141] libmachine: (addons-921129) DBG | Checking permissions on dir: /home/jenkins
	I0927 00:14:52.707766   22734 main.go:141] libmachine: (addons-921129) DBG | Checking permissions on dir: /home
	I0927 00:14:52.707773   22734 main.go:141] libmachine: (addons-921129) DBG | Skipping /home - not owner
	I0927 00:14:52.709364   22734 main.go:141] libmachine: (addons-921129) define libvirt domain using xml: 
	I0927 00:14:52.709389   22734 main.go:141] libmachine: (addons-921129) <domain type='kvm'>
	I0927 00:14:52.709396   22734 main.go:141] libmachine: (addons-921129)   <name>addons-921129</name>
	I0927 00:14:52.709401   22734 main.go:141] libmachine: (addons-921129)   <memory unit='MiB'>4000</memory>
	I0927 00:14:52.709406   22734 main.go:141] libmachine: (addons-921129)   <vcpu>2</vcpu>
	I0927 00:14:52.709410   22734 main.go:141] libmachine: (addons-921129)   <features>
	I0927 00:14:52.709414   22734 main.go:141] libmachine: (addons-921129)     <acpi/>
	I0927 00:14:52.709429   22734 main.go:141] libmachine: (addons-921129)     <apic/>
	I0927 00:14:52.709436   22734 main.go:141] libmachine: (addons-921129)     <pae/>
	I0927 00:14:52.709445   22734 main.go:141] libmachine: (addons-921129)     
	I0927 00:14:52.709481   22734 main.go:141] libmachine: (addons-921129)   </features>
	I0927 00:14:52.709516   22734 main.go:141] libmachine: (addons-921129)   <cpu mode='host-passthrough'>
	I0927 00:14:52.709528   22734 main.go:141] libmachine: (addons-921129)   
	I0927 00:14:52.709543   22734 main.go:141] libmachine: (addons-921129)   </cpu>
	I0927 00:14:52.709551   22734 main.go:141] libmachine: (addons-921129)   <os>
	I0927 00:14:52.709558   22734 main.go:141] libmachine: (addons-921129)     <type>hvm</type>
	I0927 00:14:52.709569   22734 main.go:141] libmachine: (addons-921129)     <boot dev='cdrom'/>
	I0927 00:14:52.709580   22734 main.go:141] libmachine: (addons-921129)     <boot dev='hd'/>
	I0927 00:14:52.709592   22734 main.go:141] libmachine: (addons-921129)     <bootmenu enable='no'/>
	I0927 00:14:52.709601   22734 main.go:141] libmachine: (addons-921129)   </os>
	I0927 00:14:52.709610   22734 main.go:141] libmachine: (addons-921129)   <devices>
	I0927 00:14:52.709622   22734 main.go:141] libmachine: (addons-921129)     <disk type='file' device='cdrom'>
	I0927 00:14:52.709637   22734 main.go:141] libmachine: (addons-921129)       <source file='/home/jenkins/minikube-integration/19711-14912/.minikube/machines/addons-921129/boot2docker.iso'/>
	I0927 00:14:52.709664   22734 main.go:141] libmachine: (addons-921129)       <target dev='hdc' bus='scsi'/>
	I0927 00:14:52.709679   22734 main.go:141] libmachine: (addons-921129)       <readonly/>
	I0927 00:14:52.709686   22734 main.go:141] libmachine: (addons-921129)     </disk>
	I0927 00:14:52.709696   22734 main.go:141] libmachine: (addons-921129)     <disk type='file' device='disk'>
	I0927 00:14:52.709709   22734 main.go:141] libmachine: (addons-921129)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0927 00:14:52.709723   22734 main.go:141] libmachine: (addons-921129)       <source file='/home/jenkins/minikube-integration/19711-14912/.minikube/machines/addons-921129/addons-921129.rawdisk'/>
	I0927 00:14:52.709733   22734 main.go:141] libmachine: (addons-921129)       <target dev='hda' bus='virtio'/>
	I0927 00:14:52.709746   22734 main.go:141] libmachine: (addons-921129)     </disk>
	I0927 00:14:52.709762   22734 main.go:141] libmachine: (addons-921129)     <interface type='network'>
	I0927 00:14:52.709775   22734 main.go:141] libmachine: (addons-921129)       <source network='mk-addons-921129'/>
	I0927 00:14:52.709785   22734 main.go:141] libmachine: (addons-921129)       <model type='virtio'/>
	I0927 00:14:52.709808   22734 main.go:141] libmachine: (addons-921129)     </interface>
	I0927 00:14:52.709815   22734 main.go:141] libmachine: (addons-921129)     <interface type='network'>
	I0927 00:14:52.709825   22734 main.go:141] libmachine: (addons-921129)       <source network='default'/>
	I0927 00:14:52.709835   22734 main.go:141] libmachine: (addons-921129)       <model type='virtio'/>
	I0927 00:14:52.709854   22734 main.go:141] libmachine: (addons-921129)     </interface>
	I0927 00:14:52.709875   22734 main.go:141] libmachine: (addons-921129)     <serial type='pty'>
	I0927 00:14:52.709887   22734 main.go:141] libmachine: (addons-921129)       <target port='0'/>
	I0927 00:14:52.709896   22734 main.go:141] libmachine: (addons-921129)     </serial>
	I0927 00:14:52.709924   22734 main.go:141] libmachine: (addons-921129)     <console type='pty'>
	I0927 00:14:52.709952   22734 main.go:141] libmachine: (addons-921129)       <target type='serial' port='0'/>
	I0927 00:14:52.709960   22734 main.go:141] libmachine: (addons-921129)     </console>
	I0927 00:14:52.709969   22734 main.go:141] libmachine: (addons-921129)     <rng model='virtio'>
	I0927 00:14:52.709975   22734 main.go:141] libmachine: (addons-921129)       <backend model='random'>/dev/random</backend>
	I0927 00:14:52.709985   22734 main.go:141] libmachine: (addons-921129)     </rng>
	I0927 00:14:52.709990   22734 main.go:141] libmachine: (addons-921129)     
	I0927 00:14:52.709997   22734 main.go:141] libmachine: (addons-921129)     
	I0927 00:14:52.710001   22734 main.go:141] libmachine: (addons-921129)   </devices>
	I0927 00:14:52.710007   22734 main.go:141] libmachine: (addons-921129) </domain>
	I0927 00:14:52.710018   22734 main.go:141] libmachine: (addons-921129) 
	I0927 00:14:52.716211   22734 main.go:141] libmachine: (addons-921129) DBG | domain addons-921129 has defined MAC address 52:54:00:76:91:46 in network default
	I0927 00:14:52.716760   22734 main.go:141] libmachine: (addons-921129) Ensuring networks are active...
	I0927 00:14:52.716781   22734 main.go:141] libmachine: (addons-921129) DBG | domain addons-921129 has defined MAC address 52:54:00:f6:1f:55 in network mk-addons-921129
	I0927 00:14:52.717482   22734 main.go:141] libmachine: (addons-921129) Ensuring network default is active
	I0927 00:14:52.717858   22734 main.go:141] libmachine: (addons-921129) Ensuring network mk-addons-921129 is active
	I0927 00:14:52.718460   22734 main.go:141] libmachine: (addons-921129) Getting domain xml...
	I0927 00:14:52.719286   22734 main.go:141] libmachine: (addons-921129) Creating domain...
	I0927 00:14:54.202831   22734 main.go:141] libmachine: (addons-921129) Waiting to get IP...
	I0927 00:14:54.203890   22734 main.go:141] libmachine: (addons-921129) DBG | domain addons-921129 has defined MAC address 52:54:00:f6:1f:55 in network mk-addons-921129
	I0927 00:14:54.204502   22734 main.go:141] libmachine: (addons-921129) DBG | unable to find current IP address of domain addons-921129 in network mk-addons-921129
	I0927 00:14:54.204607   22734 main.go:141] libmachine: (addons-921129) DBG | I0927 00:14:54.204491   22755 retry.go:31] will retry after 198.553934ms: waiting for machine to come up
	I0927 00:14:54.405281   22734 main.go:141] libmachine: (addons-921129) DBG | domain addons-921129 has defined MAC address 52:54:00:f6:1f:55 in network mk-addons-921129
	I0927 00:14:54.405822   22734 main.go:141] libmachine: (addons-921129) DBG | unable to find current IP address of domain addons-921129 in network mk-addons-921129
	I0927 00:14:54.405843   22734 main.go:141] libmachine: (addons-921129) DBG | I0927 00:14:54.405765   22755 retry.go:31] will retry after 376.579371ms: waiting for machine to come up
	I0927 00:14:54.784350   22734 main.go:141] libmachine: (addons-921129) DBG | domain addons-921129 has defined MAC address 52:54:00:f6:1f:55 in network mk-addons-921129
	I0927 00:14:54.784795   22734 main.go:141] libmachine: (addons-921129) DBG | unable to find current IP address of domain addons-921129 in network mk-addons-921129
	I0927 00:14:54.784817   22734 main.go:141] libmachine: (addons-921129) DBG | I0927 00:14:54.784748   22755 retry.go:31] will retry after 460.641428ms: waiting for machine to come up
	I0927 00:14:55.247749   22734 main.go:141] libmachine: (addons-921129) DBG | domain addons-921129 has defined MAC address 52:54:00:f6:1f:55 in network mk-addons-921129
	I0927 00:14:55.248295   22734 main.go:141] libmachine: (addons-921129) DBG | unable to find current IP address of domain addons-921129 in network mk-addons-921129
	I0927 00:14:55.248331   22734 main.go:141] libmachine: (addons-921129) DBG | I0927 00:14:55.248239   22755 retry.go:31] will retry after 480.558888ms: waiting for machine to come up
	I0927 00:14:55.729872   22734 main.go:141] libmachine: (addons-921129) DBG | domain addons-921129 has defined MAC address 52:54:00:f6:1f:55 in network mk-addons-921129
	I0927 00:14:55.730338   22734 main.go:141] libmachine: (addons-921129) DBG | unable to find current IP address of domain addons-921129 in network mk-addons-921129
	I0927 00:14:55.730366   22734 main.go:141] libmachine: (addons-921129) DBG | I0927 00:14:55.730282   22755 retry.go:31] will retry after 670.736004ms: waiting for machine to come up
	I0927 00:14:56.402252   22734 main.go:141] libmachine: (addons-921129) DBG | domain addons-921129 has defined MAC address 52:54:00:f6:1f:55 in network mk-addons-921129
	I0927 00:14:56.402654   22734 main.go:141] libmachine: (addons-921129) DBG | unable to find current IP address of domain addons-921129 in network mk-addons-921129
	I0927 00:14:56.402683   22734 main.go:141] libmachine: (addons-921129) DBG | I0927 00:14:56.402619   22755 retry.go:31] will retry after 887.522821ms: waiting for machine to come up
	I0927 00:14:57.291404   22734 main.go:141] libmachine: (addons-921129) DBG | domain addons-921129 has defined MAC address 52:54:00:f6:1f:55 in network mk-addons-921129
	I0927 00:14:57.291826   22734 main.go:141] libmachine: (addons-921129) DBG | unable to find current IP address of domain addons-921129 in network mk-addons-921129
	I0927 00:14:57.291859   22734 main.go:141] libmachine: (addons-921129) DBG | I0927 00:14:57.291784   22755 retry.go:31] will retry after 747.391559ms: waiting for machine to come up
	I0927 00:14:58.040966   22734 main.go:141] libmachine: (addons-921129) DBG | domain addons-921129 has defined MAC address 52:54:00:f6:1f:55 in network mk-addons-921129
	I0927 00:14:58.041499   22734 main.go:141] libmachine: (addons-921129) DBG | unable to find current IP address of domain addons-921129 in network mk-addons-921129
	I0927 00:14:58.041527   22734 main.go:141] libmachine: (addons-921129) DBG | I0927 00:14:58.041444   22755 retry.go:31] will retry after 1.294269229s: waiting for machine to come up
	I0927 00:14:59.338871   22734 main.go:141] libmachine: (addons-921129) DBG | domain addons-921129 has defined MAC address 52:54:00:f6:1f:55 in network mk-addons-921129
	I0927 00:14:59.339240   22734 main.go:141] libmachine: (addons-921129) DBG | unable to find current IP address of domain addons-921129 in network mk-addons-921129
	I0927 00:14:59.339262   22734 main.go:141] libmachine: (addons-921129) DBG | I0927 00:14:59.339210   22755 retry.go:31] will retry after 1.659420758s: waiting for machine to come up
	I0927 00:15:01.001023   22734 main.go:141] libmachine: (addons-921129) DBG | domain addons-921129 has defined MAC address 52:54:00:f6:1f:55 in network mk-addons-921129
	I0927 00:15:01.001602   22734 main.go:141] libmachine: (addons-921129) DBG | unable to find current IP address of domain addons-921129 in network mk-addons-921129
	I0927 00:15:01.001626   22734 main.go:141] libmachine: (addons-921129) DBG | I0927 00:15:01.001547   22755 retry.go:31] will retry after 1.690089845s: waiting for machine to come up
	I0927 00:15:02.693725   22734 main.go:141] libmachine: (addons-921129) DBG | domain addons-921129 has defined MAC address 52:54:00:f6:1f:55 in network mk-addons-921129
	I0927 00:15:02.694228   22734 main.go:141] libmachine: (addons-921129) DBG | unable to find current IP address of domain addons-921129 in network mk-addons-921129
	I0927 00:15:02.694252   22734 main.go:141] libmachine: (addons-921129) DBG | I0927 00:15:02.694131   22755 retry.go:31] will retry after 2.152424317s: waiting for machine to come up
	I0927 00:15:04.848276   22734 main.go:141] libmachine: (addons-921129) DBG | domain addons-921129 has defined MAC address 52:54:00:f6:1f:55 in network mk-addons-921129
	I0927 00:15:04.848730   22734 main.go:141] libmachine: (addons-921129) DBG | unable to find current IP address of domain addons-921129 in network mk-addons-921129
	I0927 00:15:04.848745   22734 main.go:141] libmachine: (addons-921129) DBG | I0927 00:15:04.848701   22755 retry.go:31] will retry after 2.591399872s: waiting for machine to come up
	I0927 00:15:07.442401   22734 main.go:141] libmachine: (addons-921129) DBG | domain addons-921129 has defined MAC address 52:54:00:f6:1f:55 in network mk-addons-921129
	I0927 00:15:07.442857   22734 main.go:141] libmachine: (addons-921129) DBG | unable to find current IP address of domain addons-921129 in network mk-addons-921129
	I0927 00:15:07.442875   22734 main.go:141] libmachine: (addons-921129) DBG | I0927 00:15:07.442792   22755 retry.go:31] will retry after 3.939010067s: waiting for machine to come up
	I0927 00:15:11.386024   22734 main.go:141] libmachine: (addons-921129) DBG | domain addons-921129 has defined MAC address 52:54:00:f6:1f:55 in network mk-addons-921129
	I0927 00:15:11.386672   22734 main.go:141] libmachine: (addons-921129) DBG | unable to find current IP address of domain addons-921129 in network mk-addons-921129
	I0927 00:15:11.386696   22734 main.go:141] libmachine: (addons-921129) DBG | I0927 00:15:11.386622   22755 retry.go:31] will retry after 5.039868651s: waiting for machine to come up
	I0927 00:15:16.432343   22734 main.go:141] libmachine: (addons-921129) DBG | domain addons-921129 has defined MAC address 52:54:00:f6:1f:55 in network mk-addons-921129
	I0927 00:15:16.432987   22734 main.go:141] libmachine: (addons-921129) Found IP for machine: 192.168.39.24
	I0927 00:15:16.433005   22734 main.go:141] libmachine: (addons-921129) Reserving static IP address...
	I0927 00:15:16.433081   22734 main.go:141] libmachine: (addons-921129) DBG | domain addons-921129 has current primary IP address 192.168.39.24 and MAC address 52:54:00:f6:1f:55 in network mk-addons-921129
	I0927 00:15:16.433506   22734 main.go:141] libmachine: (addons-921129) DBG | unable to find host DHCP lease matching {name: "addons-921129", mac: "52:54:00:f6:1f:55", ip: "192.168.39.24"} in network mk-addons-921129
	I0927 00:15:16.665834   22734 main.go:141] libmachine: (addons-921129) DBG | Getting to WaitForSSH function...
	I0927 00:15:16.665927   22734 main.go:141] libmachine: (addons-921129) Reserved static IP address: 192.168.39.24
	I0927 00:15:16.665955   22734 main.go:141] libmachine: (addons-921129) Waiting for SSH to be available...
	I0927 00:15:16.668764   22734 main.go:141] libmachine: (addons-921129) DBG | domain addons-921129 has defined MAC address 52:54:00:f6:1f:55 in network mk-addons-921129
	I0927 00:15:16.669405   22734 main.go:141] libmachine: (addons-921129) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:1f:55", ip: ""} in network mk-addons-921129: {Iface:virbr1 ExpiryTime:2024-09-27 01:15:06 +0000 UTC Type:0 Mac:52:54:00:f6:1f:55 Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:minikube Clientid:01:52:54:00:f6:1f:55}
	I0927 00:15:16.669438   22734 main.go:141] libmachine: (addons-921129) DBG | domain addons-921129 has defined IP address 192.168.39.24 and MAC address 52:54:00:f6:1f:55 in network mk-addons-921129
	I0927 00:15:16.669648   22734 main.go:141] libmachine: (addons-921129) DBG | Using SSH client type: external
	I0927 00:15:16.669679   22734 main.go:141] libmachine: (addons-921129) DBG | Using SSH private key: /home/jenkins/minikube-integration/19711-14912/.minikube/machines/addons-921129/id_rsa (-rw-------)
	I0927 00:15:16.669731   22734 main.go:141] libmachine: (addons-921129) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.24 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19711-14912/.minikube/machines/addons-921129/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0927 00:15:16.669754   22734 main.go:141] libmachine: (addons-921129) DBG | About to run SSH command:
	I0927 00:15:16.669766   22734 main.go:141] libmachine: (addons-921129) DBG | exit 0
	I0927 00:15:16.807344   22734 main.go:141] libmachine: (addons-921129) DBG | SSH cmd err, output: <nil>: 
	I0927 00:15:16.807671   22734 main.go:141] libmachine: (addons-921129) KVM machine creation complete!
	I0927 00:15:16.807974   22734 main.go:141] libmachine: (addons-921129) Calling .GetConfigRaw
	I0927 00:15:16.808659   22734 main.go:141] libmachine: (addons-921129) Calling .DriverName
	I0927 00:15:16.808864   22734 main.go:141] libmachine: (addons-921129) Calling .DriverName
	I0927 00:15:16.809042   22734 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0927 00:15:16.809055   22734 main.go:141] libmachine: (addons-921129) Calling .GetState
	I0927 00:15:16.810266   22734 main.go:141] libmachine: Detecting operating system of created instance...
	I0927 00:15:16.810279   22734 main.go:141] libmachine: Waiting for SSH to be available...
	I0927 00:15:16.810284   22734 main.go:141] libmachine: Getting to WaitForSSH function...
	I0927 00:15:16.810290   22734 main.go:141] libmachine: (addons-921129) Calling .GetSSHHostname
	I0927 00:15:16.813083   22734 main.go:141] libmachine: (addons-921129) DBG | domain addons-921129 has defined MAC address 52:54:00:f6:1f:55 in network mk-addons-921129
	I0927 00:15:16.813493   22734 main.go:141] libmachine: (addons-921129) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:1f:55", ip: ""} in network mk-addons-921129: {Iface:virbr1 ExpiryTime:2024-09-27 01:15:06 +0000 UTC Type:0 Mac:52:54:00:f6:1f:55 Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:addons-921129 Clientid:01:52:54:00:f6:1f:55}
	I0927 00:15:16.813522   22734 main.go:141] libmachine: (addons-921129) DBG | domain addons-921129 has defined IP address 192.168.39.24 and MAC address 52:54:00:f6:1f:55 in network mk-addons-921129
	I0927 00:15:16.813668   22734 main.go:141] libmachine: (addons-921129) Calling .GetSSHPort
	I0927 00:15:16.813868   22734 main.go:141] libmachine: (addons-921129) Calling .GetSSHKeyPath
	I0927 00:15:16.814027   22734 main.go:141] libmachine: (addons-921129) Calling .GetSSHKeyPath
	I0927 00:15:16.814155   22734 main.go:141] libmachine: (addons-921129) Calling .GetSSHUsername
	I0927 00:15:16.814309   22734 main.go:141] libmachine: Using SSH client type: native
	I0927 00:15:16.814554   22734 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.24 22 <nil> <nil>}
	I0927 00:15:16.814566   22734 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0927 00:15:16.926361   22734 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0927 00:15:16.926383   22734 main.go:141] libmachine: Detecting the provisioner...
	I0927 00:15:16.926390   22734 main.go:141] libmachine: (addons-921129) Calling .GetSSHHostname
	I0927 00:15:16.929999   22734 main.go:141] libmachine: (addons-921129) DBG | domain addons-921129 has defined MAC address 52:54:00:f6:1f:55 in network mk-addons-921129
	I0927 00:15:16.930348   22734 main.go:141] libmachine: (addons-921129) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:1f:55", ip: ""} in network mk-addons-921129: {Iface:virbr1 ExpiryTime:2024-09-27 01:15:06 +0000 UTC Type:0 Mac:52:54:00:f6:1f:55 Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:addons-921129 Clientid:01:52:54:00:f6:1f:55}
	I0927 00:15:16.930379   22734 main.go:141] libmachine: (addons-921129) DBG | domain addons-921129 has defined IP address 192.168.39.24 and MAC address 52:54:00:f6:1f:55 in network mk-addons-921129
	I0927 00:15:16.930620   22734 main.go:141] libmachine: (addons-921129) Calling .GetSSHPort
	I0927 00:15:16.930884   22734 main.go:141] libmachine: (addons-921129) Calling .GetSSHKeyPath
	I0927 00:15:16.931076   22734 main.go:141] libmachine: (addons-921129) Calling .GetSSHKeyPath
	I0927 00:15:16.931273   22734 main.go:141] libmachine: (addons-921129) Calling .GetSSHUsername
	I0927 00:15:16.931492   22734 main.go:141] libmachine: Using SSH client type: native
	I0927 00:15:16.931674   22734 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.24 22 <nil> <nil>}
	I0927 00:15:16.931686   22734 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0927 00:15:17.047708   22734 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0927 00:15:17.047776   22734 main.go:141] libmachine: found compatible host: buildroot
	I0927 00:15:17.047787   22734 main.go:141] libmachine: Provisioning with buildroot...
	I0927 00:15:17.047796   22734 main.go:141] libmachine: (addons-921129) Calling .GetMachineName
	I0927 00:15:17.048055   22734 buildroot.go:166] provisioning hostname "addons-921129"
	I0927 00:15:17.048081   22734 main.go:141] libmachine: (addons-921129) Calling .GetMachineName
	I0927 00:15:17.048275   22734 main.go:141] libmachine: (addons-921129) Calling .GetSSHHostname
	I0927 00:15:17.051373   22734 main.go:141] libmachine: (addons-921129) DBG | domain addons-921129 has defined MAC address 52:54:00:f6:1f:55 in network mk-addons-921129
	I0927 00:15:17.051742   22734 main.go:141] libmachine: (addons-921129) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:1f:55", ip: ""} in network mk-addons-921129: {Iface:virbr1 ExpiryTime:2024-09-27 01:15:06 +0000 UTC Type:0 Mac:52:54:00:f6:1f:55 Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:addons-921129 Clientid:01:52:54:00:f6:1f:55}
	I0927 00:15:17.051774   22734 main.go:141] libmachine: (addons-921129) DBG | domain addons-921129 has defined IP address 192.168.39.24 and MAC address 52:54:00:f6:1f:55 in network mk-addons-921129
	I0927 00:15:17.051943   22734 main.go:141] libmachine: (addons-921129) Calling .GetSSHPort
	I0927 00:15:17.052172   22734 main.go:141] libmachine: (addons-921129) Calling .GetSSHKeyPath
	I0927 00:15:17.052345   22734 main.go:141] libmachine: (addons-921129) Calling .GetSSHKeyPath
	I0927 00:15:17.052453   22734 main.go:141] libmachine: (addons-921129) Calling .GetSSHUsername
	I0927 00:15:17.052615   22734 main.go:141] libmachine: Using SSH client type: native
	I0927 00:15:17.052790   22734 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.24 22 <nil> <nil>}
	I0927 00:15:17.052802   22734 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-921129 && echo "addons-921129" | sudo tee /etc/hostname
	I0927 00:15:17.181590   22734 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-921129
	
	I0927 00:15:17.181620   22734 main.go:141] libmachine: (addons-921129) Calling .GetSSHHostname
	I0927 00:15:17.184829   22734 main.go:141] libmachine: (addons-921129) DBG | domain addons-921129 has defined MAC address 52:54:00:f6:1f:55 in network mk-addons-921129
	I0927 00:15:17.185179   22734 main.go:141] libmachine: (addons-921129) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:1f:55", ip: ""} in network mk-addons-921129: {Iface:virbr1 ExpiryTime:2024-09-27 01:15:06 +0000 UTC Type:0 Mac:52:54:00:f6:1f:55 Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:addons-921129 Clientid:01:52:54:00:f6:1f:55}
	I0927 00:15:17.185205   22734 main.go:141] libmachine: (addons-921129) DBG | domain addons-921129 has defined IP address 192.168.39.24 and MAC address 52:54:00:f6:1f:55 in network mk-addons-921129
	I0927 00:15:17.185629   22734 main.go:141] libmachine: (addons-921129) Calling .GetSSHPort
	I0927 00:15:17.185847   22734 main.go:141] libmachine: (addons-921129) Calling .GetSSHKeyPath
	I0927 00:15:17.186035   22734 main.go:141] libmachine: (addons-921129) Calling .GetSSHKeyPath
	I0927 00:15:17.186178   22734 main.go:141] libmachine: (addons-921129) Calling .GetSSHUsername
	I0927 00:15:17.186334   22734 main.go:141] libmachine: Using SSH client type: native
	I0927 00:15:17.186604   22734 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.24 22 <nil> <nil>}
	I0927 00:15:17.186633   22734 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-921129' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-921129/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-921129' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0927 00:15:17.308104   22734 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0927 00:15:17.308129   22734 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19711-14912/.minikube CaCertPath:/home/jenkins/minikube-integration/19711-14912/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19711-14912/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19711-14912/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19711-14912/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19711-14912/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19711-14912/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19711-14912/.minikube}
	I0927 00:15:17.308169   22734 buildroot.go:174] setting up certificates
	I0927 00:15:17.308180   22734 provision.go:84] configureAuth start
	I0927 00:15:17.308188   22734 main.go:141] libmachine: (addons-921129) Calling .GetMachineName
	I0927 00:15:17.308552   22734 main.go:141] libmachine: (addons-921129) Calling .GetIP
	I0927 00:15:17.312107   22734 main.go:141] libmachine: (addons-921129) DBG | domain addons-921129 has defined MAC address 52:54:00:f6:1f:55 in network mk-addons-921129
	I0927 00:15:17.312663   22734 main.go:141] libmachine: (addons-921129) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:1f:55", ip: ""} in network mk-addons-921129: {Iface:virbr1 ExpiryTime:2024-09-27 01:15:06 +0000 UTC Type:0 Mac:52:54:00:f6:1f:55 Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:addons-921129 Clientid:01:52:54:00:f6:1f:55}
	I0927 00:15:17.312715   22734 main.go:141] libmachine: (addons-921129) DBG | domain addons-921129 has defined IP address 192.168.39.24 and MAC address 52:54:00:f6:1f:55 in network mk-addons-921129
	I0927 00:15:17.312882   22734 main.go:141] libmachine: (addons-921129) Calling .GetSSHHostname
	I0927 00:15:17.316287   22734 main.go:141] libmachine: (addons-921129) DBG | domain addons-921129 has defined MAC address 52:54:00:f6:1f:55 in network mk-addons-921129
	I0927 00:15:17.316725   22734 main.go:141] libmachine: (addons-921129) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:1f:55", ip: ""} in network mk-addons-921129: {Iface:virbr1 ExpiryTime:2024-09-27 01:15:06 +0000 UTC Type:0 Mac:52:54:00:f6:1f:55 Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:addons-921129 Clientid:01:52:54:00:f6:1f:55}
	I0927 00:15:17.316758   22734 main.go:141] libmachine: (addons-921129) DBG | domain addons-921129 has defined IP address 192.168.39.24 and MAC address 52:54:00:f6:1f:55 in network mk-addons-921129
	I0927 00:15:17.317000   22734 provision.go:143] copyHostCerts
	I0927 00:15:17.317087   22734 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19711-14912/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19711-14912/.minikube/key.pem (1679 bytes)
	I0927 00:15:17.317207   22734 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19711-14912/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19711-14912/.minikube/ca.pem (1078 bytes)
	I0927 00:15:17.317274   22734 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19711-14912/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19711-14912/.minikube/cert.pem (1123 bytes)
	I0927 00:15:17.317323   22734 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19711-14912/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19711-14912/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19711-14912/.minikube/certs/ca-key.pem org=jenkins.addons-921129 san=[127.0.0.1 192.168.39.24 addons-921129 localhost minikube]
	I0927 00:15:17.475569   22734 provision.go:177] copyRemoteCerts
	I0927 00:15:17.475641   22734 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0927 00:15:17.475664   22734 main.go:141] libmachine: (addons-921129) Calling .GetSSHHostname
	I0927 00:15:17.479813   22734 main.go:141] libmachine: (addons-921129) DBG | domain addons-921129 has defined MAC address 52:54:00:f6:1f:55 in network mk-addons-921129
	I0927 00:15:17.480520   22734 main.go:141] libmachine: (addons-921129) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:1f:55", ip: ""} in network mk-addons-921129: {Iface:virbr1 ExpiryTime:2024-09-27 01:15:06 +0000 UTC Type:0 Mac:52:54:00:f6:1f:55 Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:addons-921129 Clientid:01:52:54:00:f6:1f:55}
	I0927 00:15:17.480556   22734 main.go:141] libmachine: (addons-921129) DBG | domain addons-921129 has defined IP address 192.168.39.24 and MAC address 52:54:00:f6:1f:55 in network mk-addons-921129
	I0927 00:15:17.480871   22734 main.go:141] libmachine: (addons-921129) Calling .GetSSHPort
	I0927 00:15:17.481214   22734 main.go:141] libmachine: (addons-921129) Calling .GetSSHKeyPath
	I0927 00:15:17.481523   22734 main.go:141] libmachine: (addons-921129) Calling .GetSSHUsername
	I0927 00:15:17.481817   22734 sshutil.go:53] new ssh client: &{IP:192.168.39.24 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19711-14912/.minikube/machines/addons-921129/id_rsa Username:docker}
	I0927 00:15:17.570001   22734 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14912/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0927 00:15:17.597065   22734 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14912/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0927 00:15:17.623193   22734 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14912/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0927 00:15:17.648518   22734 provision.go:87] duration metric: took 340.324786ms to configureAuth
	I0927 00:15:17.648552   22734 buildroot.go:189] setting minikube options for container-runtime
	I0927 00:15:17.648849   22734 config.go:182] Loaded profile config "addons-921129": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0927 00:15:17.648888   22734 main.go:141] libmachine: (addons-921129) Calling .DriverName
	I0927 00:15:17.649344   22734 main.go:141] libmachine: (addons-921129) Calling .GetSSHHostname
	I0927 00:15:17.653039   22734 main.go:141] libmachine: (addons-921129) DBG | domain addons-921129 has defined MAC address 52:54:00:f6:1f:55 in network mk-addons-921129
	I0927 00:15:17.653544   22734 main.go:141] libmachine: (addons-921129) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:1f:55", ip: ""} in network mk-addons-921129: {Iface:virbr1 ExpiryTime:2024-09-27 01:15:06 +0000 UTC Type:0 Mac:52:54:00:f6:1f:55 Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:addons-921129 Clientid:01:52:54:00:f6:1f:55}
	I0927 00:15:17.653593   22734 main.go:141] libmachine: (addons-921129) DBG | domain addons-921129 has defined IP address 192.168.39.24 and MAC address 52:54:00:f6:1f:55 in network mk-addons-921129
	I0927 00:15:17.653823   22734 main.go:141] libmachine: (addons-921129) Calling .GetSSHPort
	I0927 00:15:17.654059   22734 main.go:141] libmachine: (addons-921129) Calling .GetSSHKeyPath
	I0927 00:15:17.654259   22734 main.go:141] libmachine: (addons-921129) Calling .GetSSHKeyPath
	I0927 00:15:17.654418   22734 main.go:141] libmachine: (addons-921129) Calling .GetSSHUsername
	I0927 00:15:17.654617   22734 main.go:141] libmachine: Using SSH client type: native
	I0927 00:15:17.654834   22734 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.24 22 <nil> <nil>}
	I0927 00:15:17.654849   22734 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0927 00:15:17.773316   22734 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0927 00:15:17.773345   22734 buildroot.go:70] root file system type: tmpfs
	I0927 00:15:17.773482   22734 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0927 00:15:17.773509   22734 main.go:141] libmachine: (addons-921129) Calling .GetSSHHostname
	I0927 00:15:17.777013   22734 main.go:141] libmachine: (addons-921129) DBG | domain addons-921129 has defined MAC address 52:54:00:f6:1f:55 in network mk-addons-921129
	I0927 00:15:17.777395   22734 main.go:141] libmachine: (addons-921129) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:1f:55", ip: ""} in network mk-addons-921129: {Iface:virbr1 ExpiryTime:2024-09-27 01:15:06 +0000 UTC Type:0 Mac:52:54:00:f6:1f:55 Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:addons-921129 Clientid:01:52:54:00:f6:1f:55}
	I0927 00:15:17.777420   22734 main.go:141] libmachine: (addons-921129) DBG | domain addons-921129 has defined IP address 192.168.39.24 and MAC address 52:54:00:f6:1f:55 in network mk-addons-921129
	I0927 00:15:17.777654   22734 main.go:141] libmachine: (addons-921129) Calling .GetSSHPort
	I0927 00:15:17.777862   22734 main.go:141] libmachine: (addons-921129) Calling .GetSSHKeyPath
	I0927 00:15:17.778025   22734 main.go:141] libmachine: (addons-921129) Calling .GetSSHKeyPath
	I0927 00:15:17.778185   22734 main.go:141] libmachine: (addons-921129) Calling .GetSSHUsername
	I0927 00:15:17.778376   22734 main.go:141] libmachine: Using SSH client type: native
	I0927 00:15:17.778532   22734 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.24 22 <nil> <nil>}
	I0927 00:15:17.778593   22734 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0927 00:15:17.906394   22734 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0927 00:15:17.906437   22734 main.go:141] libmachine: (addons-921129) Calling .GetSSHHostname
	I0927 00:15:17.909337   22734 main.go:141] libmachine: (addons-921129) DBG | domain addons-921129 has defined MAC address 52:54:00:f6:1f:55 in network mk-addons-921129
	I0927 00:15:17.909721   22734 main.go:141] libmachine: (addons-921129) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:1f:55", ip: ""} in network mk-addons-921129: {Iface:virbr1 ExpiryTime:2024-09-27 01:15:06 +0000 UTC Type:0 Mac:52:54:00:f6:1f:55 Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:addons-921129 Clientid:01:52:54:00:f6:1f:55}
	I0927 00:15:17.909749   22734 main.go:141] libmachine: (addons-921129) DBG | domain addons-921129 has defined IP address 192.168.39.24 and MAC address 52:54:00:f6:1f:55 in network mk-addons-921129
	I0927 00:15:17.909941   22734 main.go:141] libmachine: (addons-921129) Calling .GetSSHPort
	I0927 00:15:17.910149   22734 main.go:141] libmachine: (addons-921129) Calling .GetSSHKeyPath
	I0927 00:15:17.910313   22734 main.go:141] libmachine: (addons-921129) Calling .GetSSHKeyPath
	I0927 00:15:17.910464   22734 main.go:141] libmachine: (addons-921129) Calling .GetSSHUsername
	I0927 00:15:17.910633   22734 main.go:141] libmachine: Using SSH client type: native
	I0927 00:15:17.910829   22734 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.24 22 <nil> <nil>}
	I0927 00:15:17.910845   22734 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0927 00:15:19.738521   22734 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0927 00:15:19.738546   22734 main.go:141] libmachine: Checking connection to Docker...
	I0927 00:15:19.738558   22734 main.go:141] libmachine: (addons-921129) Calling .GetURL
	I0927 00:15:19.739828   22734 main.go:141] libmachine: (addons-921129) DBG | Using libvirt version 6000000
	I0927 00:15:19.742118   22734 main.go:141] libmachine: (addons-921129) DBG | domain addons-921129 has defined MAC address 52:54:00:f6:1f:55 in network mk-addons-921129
	I0927 00:15:19.742426   22734 main.go:141] libmachine: (addons-921129) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:1f:55", ip: ""} in network mk-addons-921129: {Iface:virbr1 ExpiryTime:2024-09-27 01:15:06 +0000 UTC Type:0 Mac:52:54:00:f6:1f:55 Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:addons-921129 Clientid:01:52:54:00:f6:1f:55}
	I0927 00:15:19.742444   22734 main.go:141] libmachine: (addons-921129) DBG | domain addons-921129 has defined IP address 192.168.39.24 and MAC address 52:54:00:f6:1f:55 in network mk-addons-921129
	I0927 00:15:19.742613   22734 main.go:141] libmachine: Docker is up and running!
	I0927 00:15:19.742625   22734 main.go:141] libmachine: Reticulating splines...
	I0927 00:15:19.742634   22734 client.go:171] duration metric: took 27.897651405s to LocalClient.Create
	I0927 00:15:19.742661   22734 start.go:167] duration metric: took 27.897720502s to libmachine.API.Create "addons-921129"
	I0927 00:15:19.742674   22734 start.go:293] postStartSetup for "addons-921129" (driver="kvm2")
	I0927 00:15:19.742688   22734 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0927 00:15:19.742712   22734 main.go:141] libmachine: (addons-921129) Calling .DriverName
	I0927 00:15:19.742965   22734 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0927 00:15:19.742999   22734 main.go:141] libmachine: (addons-921129) Calling .GetSSHHostname
	I0927 00:15:19.745110   22734 main.go:141] libmachine: (addons-921129) DBG | domain addons-921129 has defined MAC address 52:54:00:f6:1f:55 in network mk-addons-921129
	I0927 00:15:19.745376   22734 main.go:141] libmachine: (addons-921129) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:1f:55", ip: ""} in network mk-addons-921129: {Iface:virbr1 ExpiryTime:2024-09-27 01:15:06 +0000 UTC Type:0 Mac:52:54:00:f6:1f:55 Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:addons-921129 Clientid:01:52:54:00:f6:1f:55}
	I0927 00:15:19.745397   22734 main.go:141] libmachine: (addons-921129) DBG | domain addons-921129 has defined IP address 192.168.39.24 and MAC address 52:54:00:f6:1f:55 in network mk-addons-921129
	I0927 00:15:19.745528   22734 main.go:141] libmachine: (addons-921129) Calling .GetSSHPort
	I0927 00:15:19.745689   22734 main.go:141] libmachine: (addons-921129) Calling .GetSSHKeyPath
	I0927 00:15:19.745837   22734 main.go:141] libmachine: (addons-921129) Calling .GetSSHUsername
	I0927 00:15:19.745981   22734 sshutil.go:53] new ssh client: &{IP:192.168.39.24 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19711-14912/.minikube/machines/addons-921129/id_rsa Username:docker}
	I0927 00:15:19.833579   22734 ssh_runner.go:195] Run: cat /etc/os-release
	I0927 00:15:19.838134   22734 info.go:137] Remote host: Buildroot 2023.02.9
	I0927 00:15:19.838158   22734 filesync.go:126] Scanning /home/jenkins/minikube-integration/19711-14912/.minikube/addons for local assets ...
	I0927 00:15:19.838234   22734 filesync.go:126] Scanning /home/jenkins/minikube-integration/19711-14912/.minikube/files for local assets ...
	I0927 00:15:19.838256   22734 start.go:296] duration metric: took 95.576332ms for postStartSetup
	I0927 00:15:19.838291   22734 main.go:141] libmachine: (addons-921129) Calling .GetConfigRaw
	I0927 00:15:19.838927   22734 main.go:141] libmachine: (addons-921129) Calling .GetIP
	I0927 00:15:19.841733   22734 main.go:141] libmachine: (addons-921129) DBG | domain addons-921129 has defined MAC address 52:54:00:f6:1f:55 in network mk-addons-921129
	I0927 00:15:19.842369   22734 main.go:141] libmachine: (addons-921129) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:1f:55", ip: ""} in network mk-addons-921129: {Iface:virbr1 ExpiryTime:2024-09-27 01:15:06 +0000 UTC Type:0 Mac:52:54:00:f6:1f:55 Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:addons-921129 Clientid:01:52:54:00:f6:1f:55}
	I0927 00:15:19.842407   22734 main.go:141] libmachine: (addons-921129) DBG | domain addons-921129 has defined IP address 192.168.39.24 and MAC address 52:54:00:f6:1f:55 in network mk-addons-921129
	I0927 00:15:19.842745   22734 profile.go:143] Saving config to /home/jenkins/minikube-integration/19711-14912/.minikube/profiles/addons-921129/config.json ...
	I0927 00:15:19.843021   22734 start.go:128] duration metric: took 28.017963939s to createHost
	I0927 00:15:19.843049   22734 main.go:141] libmachine: (addons-921129) Calling .GetSSHHostname
	I0927 00:15:19.845679   22734 main.go:141] libmachine: (addons-921129) DBG | domain addons-921129 has defined MAC address 52:54:00:f6:1f:55 in network mk-addons-921129
	I0927 00:15:19.846080   22734 main.go:141] libmachine: (addons-921129) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:1f:55", ip: ""} in network mk-addons-921129: {Iface:virbr1 ExpiryTime:2024-09-27 01:15:06 +0000 UTC Type:0 Mac:52:54:00:f6:1f:55 Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:addons-921129 Clientid:01:52:54:00:f6:1f:55}
	I0927 00:15:19.846108   22734 main.go:141] libmachine: (addons-921129) DBG | domain addons-921129 has defined IP address 192.168.39.24 and MAC address 52:54:00:f6:1f:55 in network mk-addons-921129
	I0927 00:15:19.846303   22734 main.go:141] libmachine: (addons-921129) Calling .GetSSHPort
	I0927 00:15:19.846503   22734 main.go:141] libmachine: (addons-921129) Calling .GetSSHKeyPath
	I0927 00:15:19.846633   22734 main.go:141] libmachine: (addons-921129) Calling .GetSSHKeyPath
	I0927 00:15:19.846803   22734 main.go:141] libmachine: (addons-921129) Calling .GetSSHUsername
	I0927 00:15:19.846982   22734 main.go:141] libmachine: Using SSH client type: native
	I0927 00:15:19.847151   22734 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.24 22 <nil> <nil>}
	I0927 00:15:19.847161   22734 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0927 00:15:19.959667   22734 main.go:141] libmachine: SSH cmd err, output: <nil>: 1727396119.940287262
	
	I0927 00:15:19.959687   22734 fix.go:216] guest clock: 1727396119.940287262
	I0927 00:15:19.959693   22734 fix.go:229] Guest: 2024-09-27 00:15:19.940287262 +0000 UTC Remote: 2024-09-27 00:15:19.843037017 +0000 UTC m=+28.123837435 (delta=97.250245ms)
	I0927 00:15:19.959714   22734 fix.go:200] guest clock delta is within tolerance: 97.250245ms
	I0927 00:15:19.959719   22734 start.go:83] releasing machines lock for "addons-921129", held for 28.13474452s
	I0927 00:15:19.959739   22734 main.go:141] libmachine: (addons-921129) Calling .DriverName
	I0927 00:15:19.960020   22734 main.go:141] libmachine: (addons-921129) Calling .GetIP
	I0927 00:15:19.962920   22734 main.go:141] libmachine: (addons-921129) DBG | domain addons-921129 has defined MAC address 52:54:00:f6:1f:55 in network mk-addons-921129
	I0927 00:15:19.963311   22734 main.go:141] libmachine: (addons-921129) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:1f:55", ip: ""} in network mk-addons-921129: {Iface:virbr1 ExpiryTime:2024-09-27 01:15:06 +0000 UTC Type:0 Mac:52:54:00:f6:1f:55 Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:addons-921129 Clientid:01:52:54:00:f6:1f:55}
	I0927 00:15:19.963343   22734 main.go:141] libmachine: (addons-921129) DBG | domain addons-921129 has defined IP address 192.168.39.24 and MAC address 52:54:00:f6:1f:55 in network mk-addons-921129
	I0927 00:15:19.963485   22734 main.go:141] libmachine: (addons-921129) Calling .DriverName
	I0927 00:15:19.964175   22734 main.go:141] libmachine: (addons-921129) Calling .DriverName
	I0927 00:15:19.964421   22734 main.go:141] libmachine: (addons-921129) Calling .DriverName
	I0927 00:15:19.964526   22734 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0927 00:15:19.964579   22734 main.go:141] libmachine: (addons-921129) Calling .GetSSHHostname
	I0927 00:15:19.964640   22734 ssh_runner.go:195] Run: cat /version.json
	I0927 00:15:19.964678   22734 main.go:141] libmachine: (addons-921129) Calling .GetSSHHostname
	I0927 00:15:19.967367   22734 main.go:141] libmachine: (addons-921129) DBG | domain addons-921129 has defined MAC address 52:54:00:f6:1f:55 in network mk-addons-921129
	I0927 00:15:19.967522   22734 main.go:141] libmachine: (addons-921129) DBG | domain addons-921129 has defined MAC address 52:54:00:f6:1f:55 in network mk-addons-921129
	I0927 00:15:19.967704   22734 main.go:141] libmachine: (addons-921129) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:1f:55", ip: ""} in network mk-addons-921129: {Iface:virbr1 ExpiryTime:2024-09-27 01:15:06 +0000 UTC Type:0 Mac:52:54:00:f6:1f:55 Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:addons-921129 Clientid:01:52:54:00:f6:1f:55}
	I0927 00:15:19.967728   22734 main.go:141] libmachine: (addons-921129) DBG | domain addons-921129 has defined IP address 192.168.39.24 and MAC address 52:54:00:f6:1f:55 in network mk-addons-921129
	I0927 00:15:19.967878   22734 main.go:141] libmachine: (addons-921129) Calling .GetSSHPort
	I0927 00:15:19.967982   22734 main.go:141] libmachine: (addons-921129) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:1f:55", ip: ""} in network mk-addons-921129: {Iface:virbr1 ExpiryTime:2024-09-27 01:15:06 +0000 UTC Type:0 Mac:52:54:00:f6:1f:55 Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:addons-921129 Clientid:01:52:54:00:f6:1f:55}
	I0927 00:15:19.968014   22734 main.go:141] libmachine: (addons-921129) DBG | domain addons-921129 has defined IP address 192.168.39.24 and MAC address 52:54:00:f6:1f:55 in network mk-addons-921129
	I0927 00:15:19.968083   22734 main.go:141] libmachine: (addons-921129) Calling .GetSSHKeyPath
	I0927 00:15:19.968208   22734 main.go:141] libmachine: (addons-921129) Calling .GetSSHPort
	I0927 00:15:19.968305   22734 main.go:141] libmachine: (addons-921129) Calling .GetSSHUsername
	I0927 00:15:19.968365   22734 main.go:141] libmachine: (addons-921129) Calling .GetSSHKeyPath
	I0927 00:15:19.968488   22734 main.go:141] libmachine: (addons-921129) Calling .GetSSHUsername
	I0927 00:15:19.968485   22734 sshutil.go:53] new ssh client: &{IP:192.168.39.24 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19711-14912/.minikube/machines/addons-921129/id_rsa Username:docker}
	I0927 00:15:19.968611   22734 sshutil.go:53] new ssh client: &{IP:192.168.39.24 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19711-14912/.minikube/machines/addons-921129/id_rsa Username:docker}
	I0927 00:15:20.085637   22734 ssh_runner.go:195] Run: systemctl --version
	I0927 00:15:20.091838   22734 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0927 00:15:20.098130   22734 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0927 00:15:20.098206   22734 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0927 00:15:20.118163   22734 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0927 00:15:20.118192   22734 start.go:495] detecting cgroup driver to use...
	I0927 00:15:20.118309   22734 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0927 00:15:20.138901   22734 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0927 00:15:20.151183   22734 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0927 00:15:20.163717   22734 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0927 00:15:20.163798   22734 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0927 00:15:20.176755   22734 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0927 00:15:20.189493   22734 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0927 00:15:20.201454   22734 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0927 00:15:20.213199   22734 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0927 00:15:20.224644   22734 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0927 00:15:20.236122   22734 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0927 00:15:20.248543   22734 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0927 00:15:20.261556   22734 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0927 00:15:20.273155   22734 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0927 00:15:20.273216   22734 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0927 00:15:20.286091   22734 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0927 00:15:20.297915   22734 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0927 00:15:20.431478   22734 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0927 00:15:20.460476   22734 start.go:495] detecting cgroup driver to use...
	I0927 00:15:20.460560   22734 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0927 00:15:20.481844   22734 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0927 00:15:20.497821   22734 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0927 00:15:20.519778   22734 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0927 00:15:20.534942   22734 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0927 00:15:20.549466   22734 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0927 00:15:20.582375   22734 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0927 00:15:20.597172   22734 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0927 00:15:20.622046   22734 ssh_runner.go:195] Run: which cri-dockerd
	I0927 00:15:20.627297   22734 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0927 00:15:20.637393   22734 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0927 00:15:20.655376   22734 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0927 00:15:20.769881   22734 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0927 00:15:20.888909   22734 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0927 00:15:20.889016   22734 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0927 00:15:20.907226   22734 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0927 00:15:21.027880   22734 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0927 00:15:23.755555   22734 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.727635029s)
	I0927 00:15:23.755624   22734 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0927 00:15:23.770463   22734 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0927 00:15:23.784952   22734 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0927 00:15:23.904455   22734 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0927 00:15:24.032100   22734 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0927 00:15:24.160553   22734 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0927 00:15:24.178801   22734 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0927 00:15:24.192969   22734 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0927 00:15:24.305499   22734 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0927 00:15:24.384771   22734 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0927 00:15:24.384875   22734 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0927 00:15:24.390606   22734 start.go:563] Will wait 60s for crictl version
	I0927 00:15:24.390673   22734 ssh_runner.go:195] Run: which crictl
	I0927 00:15:24.395101   22734 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0927 00:15:24.441652   22734 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.3.1
	RuntimeApiVersion:  v1
	I0927 00:15:24.441723   22734 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0927 00:15:24.466532   22734 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0927 00:15:24.491848   22734 out.go:235] * Preparing Kubernetes v1.31.1 on Docker 27.3.1 ...
	I0927 00:15:24.491905   22734 main.go:141] libmachine: (addons-921129) Calling .GetIP
	I0927 00:15:24.494718   22734 main.go:141] libmachine: (addons-921129) DBG | domain addons-921129 has defined MAC address 52:54:00:f6:1f:55 in network mk-addons-921129
	I0927 00:15:24.495149   22734 main.go:141] libmachine: (addons-921129) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:1f:55", ip: ""} in network mk-addons-921129: {Iface:virbr1 ExpiryTime:2024-09-27 01:15:06 +0000 UTC Type:0 Mac:52:54:00:f6:1f:55 Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:addons-921129 Clientid:01:52:54:00:f6:1f:55}
	I0927 00:15:24.495180   22734 main.go:141] libmachine: (addons-921129) DBG | domain addons-921129 has defined IP address 192.168.39.24 and MAC address 52:54:00:f6:1f:55 in network mk-addons-921129
	I0927 00:15:24.495438   22734 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0927 00:15:24.499536   22734 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
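The two commands above are minikube's idempotent /etc/hosts update: check for the entry, then filter out any stale `host.minikube.internal` line and append the current mapping before copying the file back into place. A minimal sketch of the same filter-and-append pattern against a temporary file (the path and starting contents are stand-ins, not taken from this run):

```shell
# Idempotent hosts-entry update: drop the old mapping, append the new one,
# then replace the file in one move (the log uses `sudo cp` for the same step).
HOSTS=$(mktemp)
printf '127.0.0.1\tlocalhost\n10.0.0.5\thost.minikube.internal\n' > "$HOSTS"
{ grep -v 'host\.minikube\.internal$' "$HOSTS"
  printf '192.168.39.1\thost.minikube.internal\n'; } > "$HOSTS.new"
mv "$HOSTS.new" "$HOSTS"
cat "$HOSTS"
```

Running the block twice leaves exactly one `host.minikube.internal` line, which is the point of the grep-then-append construction.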
	I0927 00:15:24.512224   22734 kubeadm.go:883] updating cluster {Name:addons-921129 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.
1 ClusterName:addons-921129 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.24 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Mou
ntType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0927 00:15:24.512368   22734 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0927 00:15:24.512422   22734 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0927 00:15:24.528241   22734 docker.go:685] Got preloaded images: 
	I0927 00:15:24.528263   22734 docker.go:691] registry.k8s.io/kube-apiserver:v1.31.1 wasn't preloaded
	I0927 00:15:24.528313   22734 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0927 00:15:24.538117   22734 ssh_runner.go:195] Run: which lz4
	I0927 00:15:24.542142   22734 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0927 00:15:24.546365   22734 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0927 00:15:24.546411   22734 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14912/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (342028912 bytes)
	I0927 00:15:25.744349   22734 docker.go:649] duration metric: took 1.202234906s to copy over tarball
	I0927 00:15:25.744415   22734 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0927 00:15:27.744130   22734 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.999687175s)
	I0927 00:15:27.744159   22734 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0927 00:15:27.779732   22734 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0927 00:15:27.790628   22734 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2631 bytes)
	I0927 00:15:27.808249   22734 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0927 00:15:27.922667   22734 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0927 00:15:31.156255   22734 ssh_runner.go:235] Completed: sudo systemctl restart docker: (3.233538846s)
	I0927 00:15:31.156341   22734 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0927 00:15:31.174295   22734 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.31.1
	registry.k8s.io/kube-scheduler:v1.31.1
	registry.k8s.io/kube-controller-manager:v1.31.1
	registry.k8s.io/kube-proxy:v1.31.1
	registry.k8s.io/coredns/coredns:v1.11.3
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0927 00:15:31.174329   22734 cache_images.go:84] Images are preloaded, skipping loading
	I0927 00:15:31.174342   22734 kubeadm.go:934] updating node { 192.168.39.24 8443 v1.31.1 docker true true} ...
	I0927 00:15:31.174475   22734 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-921129 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.24
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:addons-921129 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0927 00:15:31.174555   22734 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0927 00:15:31.224216   22734 cni.go:84] Creating CNI manager for ""
	I0927 00:15:31.224249   22734 cni.go:158] "kvm2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0927 00:15:31.224260   22734 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0927 00:15:31.224279   22734 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.24 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-921129 NodeName:addons-921129 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.24"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.24 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/ku
bernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0927 00:15:31.224409   22734 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.24
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "addons-921129"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.24
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.24"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
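For reference, the kubeadm config dumped above is a single four-document YAML stream: InitConfiguration, ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration, separated by `---`. A quick structural sanity check one could run against such a file (the sample below is a skeleton of the stream's shape, not the full config from this run):

```shell
# Write a skeleton of the four-document stream and confirm one `kind:` per
# document and three `---` separators between the four documents.
CFG=$(mktemp)
cat > "$CFG" <<'EOF'
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
EOF
grep -c '^kind:' "$CFG"   # 4
```

Note that KubeletConfiguration and KubeProxyConfiguration come from their own API groups, not `kubeadm.k8s.io` — which is why the deprecation warnings later in this log mention only the two `v1beta3` kubeadm documents.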
	I0927 00:15:31.224464   22734 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0927 00:15:31.234680   22734 binaries.go:44] Found k8s binaries, skipping transfer
	I0927 00:15:31.234747   22734 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0927 00:15:31.245261   22734 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0927 00:15:31.262977   22734 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0927 00:15:31.280185   22734 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2158 bytes)
	I0927 00:15:31.297732   22734 ssh_runner.go:195] Run: grep 192.168.39.24	control-plane.minikube.internal$ /etc/hosts
	I0927 00:15:31.301840   22734 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.24	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0927 00:15:31.314543   22734 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0927 00:15:31.436954   22734 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0927 00:15:31.460886   22734 certs.go:68] Setting up /home/jenkins/minikube-integration/19711-14912/.minikube/profiles/addons-921129 for IP: 192.168.39.24
	I0927 00:15:31.460912   22734 certs.go:194] generating shared ca certs ...
	I0927 00:15:31.460928   22734 certs.go:226] acquiring lock for ca certs: {Name:mk7dc6f3af73e66427d5b4c4b097073d5ddbcee3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 00:15:31.461155   22734 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19711-14912/.minikube/ca.key
	I0927 00:15:31.725027   22734 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19711-14912/.minikube/ca.crt ...
	I0927 00:15:31.725059   22734 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19711-14912/.minikube/ca.crt: {Name:mk8785ec27543b690910a1d48755d6686b4cad78 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 00:15:31.725235   22734 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19711-14912/.minikube/ca.key ...
	I0927 00:15:31.725246   22734 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19711-14912/.minikube/ca.key: {Name:mk6937b09595da97a95aad00b44dbed1a985b562 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 00:15:31.725329   22734 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19711-14912/.minikube/proxy-client-ca.key
	I0927 00:15:31.794460   22734 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19711-14912/.minikube/proxy-client-ca.crt ...
	I0927 00:15:31.794493   22734 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19711-14912/.minikube/proxy-client-ca.crt: {Name:mk3d0687594f8b418a21d62e364c3018e7482c10 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 00:15:31.794661   22734 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19711-14912/.minikube/proxy-client-ca.key ...
	I0927 00:15:31.794672   22734 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19711-14912/.minikube/proxy-client-ca.key: {Name:mk128b5bde50a8ee73347c6d15d0cc078e6f2004 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 00:15:31.794739   22734 certs.go:256] generating profile certs ...
	I0927 00:15:31.794793   22734 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19711-14912/.minikube/profiles/addons-921129/client.key
	I0927 00:15:31.794833   22734 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19711-14912/.minikube/profiles/addons-921129/client.crt with IP's: []
	I0927 00:15:31.876198   22734 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19711-14912/.minikube/profiles/addons-921129/client.crt ...
	I0927 00:15:31.876232   22734 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19711-14912/.minikube/profiles/addons-921129/client.crt: {Name:mkab6db2e12dc6774e1bd74f63e87381ef4bb6ad Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 00:15:31.876403   22734 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19711-14912/.minikube/profiles/addons-921129/client.key ...
	I0927 00:15:31.876415   22734 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19711-14912/.minikube/profiles/addons-921129/client.key: {Name:mk4c72383eb83021c691c68e64ab44ab586fd8f2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 00:15:31.876482   22734 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19711-14912/.minikube/profiles/addons-921129/apiserver.key.1c7472b3
	I0927 00:15:31.876500   22734 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19711-14912/.minikube/profiles/addons-921129/apiserver.crt.1c7472b3 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.24]
	I0927 00:15:31.932725   22734 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19711-14912/.minikube/profiles/addons-921129/apiserver.crt.1c7472b3 ...
	I0927 00:15:31.932752   22734 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19711-14912/.minikube/profiles/addons-921129/apiserver.crt.1c7472b3: {Name:mkf76f3cdb89b1d5b27915f973a7239c7aab6d66 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 00:15:31.932908   22734 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19711-14912/.minikube/profiles/addons-921129/apiserver.key.1c7472b3 ...
	I0927 00:15:31.932922   22734 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19711-14912/.minikube/profiles/addons-921129/apiserver.key.1c7472b3: {Name:mk7ee397a966cb55b1aeb3fd2872ab8b52c00cbf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 00:15:31.932992   22734 certs.go:381] copying /home/jenkins/minikube-integration/19711-14912/.minikube/profiles/addons-921129/apiserver.crt.1c7472b3 -> /home/jenkins/minikube-integration/19711-14912/.minikube/profiles/addons-921129/apiserver.crt
	I0927 00:15:31.933065   22734 certs.go:385] copying /home/jenkins/minikube-integration/19711-14912/.minikube/profiles/addons-921129/apiserver.key.1c7472b3 -> /home/jenkins/minikube-integration/19711-14912/.minikube/profiles/addons-921129/apiserver.key
	I0927 00:15:31.933109   22734 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19711-14912/.minikube/profiles/addons-921129/proxy-client.key
	I0927 00:15:31.933125   22734 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19711-14912/.minikube/profiles/addons-921129/proxy-client.crt with IP's: []
	I0927 00:15:31.995658   22734 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19711-14912/.minikube/profiles/addons-921129/proxy-client.crt ...
	I0927 00:15:31.995689   22734 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19711-14912/.minikube/profiles/addons-921129/proxy-client.crt: {Name:mk6e0efc2454af14a12090cc1c7a19f473998bb0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 00:15:31.995847   22734 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19711-14912/.minikube/profiles/addons-921129/proxy-client.key ...
	I0927 00:15:31.995857   22734 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19711-14912/.minikube/profiles/addons-921129/proxy-client.key: {Name:mkccb110ec31c738774080d02abb4bdc766c7d01 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 00:15:31.996024   22734 certs.go:484] found cert: /home/jenkins/minikube-integration/19711-14912/.minikube/certs/ca-key.pem (1679 bytes)
	I0927 00:15:31.996056   22734 certs.go:484] found cert: /home/jenkins/minikube-integration/19711-14912/.minikube/certs/ca.pem (1078 bytes)
	I0927 00:15:31.996089   22734 certs.go:484] found cert: /home/jenkins/minikube-integration/19711-14912/.minikube/certs/cert.pem (1123 bytes)
	I0927 00:15:31.996112   22734 certs.go:484] found cert: /home/jenkins/minikube-integration/19711-14912/.minikube/certs/key.pem (1679 bytes)
	I0927 00:15:31.996653   22734 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14912/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0927 00:15:32.021273   22734 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14912/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0927 00:15:32.045892   22734 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14912/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0927 00:15:32.071368   22734 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14912/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0927 00:15:32.095780   22734 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14912/.minikube/profiles/addons-921129/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0927 00:15:32.121185   22734 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14912/.minikube/profiles/addons-921129/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0927 00:15:32.151113   22734 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14912/.minikube/profiles/addons-921129/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0927 00:15:32.184901   22734 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14912/.minikube/profiles/addons-921129/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0927 00:15:32.209825   22734 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-14912/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0927 00:15:32.234751   22734 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0927 00:15:32.252124   22734 ssh_runner.go:195] Run: openssl version
	I0927 00:15:32.258290   22734 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0927 00:15:32.270017   22734 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0927 00:15:32.274937   22734 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 27 00:15 /usr/share/ca-certificates/minikubeCA.pem
	I0927 00:15:32.274995   22734 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0927 00:15:32.281428   22734 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
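The `openssl x509 -hash` call followed by the `ln -fs ... b5213941.0` command above is OpenSSL's trust-anchor lookup convention: CA certificates are found by a `<subject-hash>.0` symlink in the certs directory. The pattern can be reproduced in a temp directory with a throwaway self-signed CA (all names and paths below are illustrative, not from this run):

```shell
# Generate a disposable self-signed CA, compute its subject-name hash, and
# create the <hash>.0 symlink OpenSSL uses to resolve trust anchors.
DIR=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -subj "/CN=minikubeCA" -keyout "$DIR/ca.key" -out "$DIR/ca.pem" 2>/dev/null
HASH=$(openssl x509 -hash -noout -in "$DIR/ca.pem")
ln -fs "$DIR/ca.pem" "$DIR/$HASH.0"
ls -l "$DIR/$HASH.0"
```

The `test -L ... || ln -fs ...` form in the log just makes the symlink creation idempotent across restarts.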
	I0927 00:15:32.293557   22734 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0927 00:15:32.297919   22734 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0927 00:15:32.297972   22734 kubeadm.go:392] StartCluster: {Name:addons-921129 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 C
lusterName:addons-921129 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.24 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountT
ype:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0927 00:15:32.298111   22734 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0927 00:15:32.314551   22734 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0927 00:15:32.325165   22734 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0927 00:15:32.335928   22734 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0927 00:15:32.346122   22734 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0927 00:15:32.346144   22734 kubeadm.go:157] found existing configuration files:
	
	I0927 00:15:32.346185   22734 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0927 00:15:32.356036   22734 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0927 00:15:32.356091   22734 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0927 00:15:32.366721   22734 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0927 00:15:32.376387   22734 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0927 00:15:32.376452   22734 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0927 00:15:32.386583   22734 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0927 00:15:32.396286   22734 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0927 00:15:32.396344   22734 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0927 00:15:32.406557   22734 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0927 00:15:32.415799   22734 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0927 00:15:32.415873   22734 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
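The stale-config cleanup above keys off grep's exit status: 0 means the control-plane endpoint is already present (keep the file), anything non-zero triggers the `rm -f`. A missing file yields status 2 rather than 1, which is why each check in this first-start log reports `Process exited with status 2`. A small demonstration of the three exit codes (file paths are throwaway temporaries):

```shell
# grep exit codes: 0 = match found, 1 = no match, 2 = error (e.g. file missing).
F=$(mktemp)
echo 'server: https://control-plane.minikube.internal:8443' > "$F"
hit=$(grep -q 'control-plane.minikube.internal:8443' "$F"; echo $?)
miss=$(grep -q 'some-other-endpoint' "$F"; echo $?)
gone=$(grep -q 'anything' /no/such/conf 2>/dev/null; echo $?)
echo "match=$hit no-match=$miss missing-file=$gone"   # match=0 no-match=1 missing-file=2
```

Treating 1 and 2 identically is safe here: in both cases the kubeconfig either lacks the expected endpoint or does not exist, so removing it and regenerating is the right recovery.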
	I0927 00:15:32.426101   22734 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0927 00:15:32.473951   22734 kubeadm.go:310] W0927 00:15:32.455441    1509 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0927 00:15:32.480651   22734 kubeadm.go:310] W0927 00:15:32.462281    1509 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0927 00:15:32.586510   22734 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0927 00:15:42.599724   22734 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0927 00:15:42.599795   22734 kubeadm.go:310] [preflight] Running pre-flight checks
	I0927 00:15:42.599893   22734 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0927 00:15:42.600010   22734 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0927 00:15:42.600147   22734 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0927 00:15:42.600265   22734 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0927 00:15:42.602198   22734 out.go:235]   - Generating certificates and keys ...
	I0927 00:15:42.602281   22734 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0927 00:15:42.602332   22734 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0927 00:15:42.602414   22734 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0927 00:15:42.602481   22734 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0927 00:15:42.602528   22734 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0927 00:15:42.602573   22734 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0927 00:15:42.602639   22734 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0927 00:15:42.602787   22734 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-921129 localhost] and IPs [192.168.39.24 127.0.0.1 ::1]
	I0927 00:15:42.602884   22734 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0927 00:15:42.603002   22734 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-921129 localhost] and IPs [192.168.39.24 127.0.0.1 ::1]
	I0927 00:15:42.603065   22734 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0927 00:15:42.603138   22734 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0927 00:15:42.603176   22734 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0927 00:15:42.603235   22734 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0927 00:15:42.603295   22734 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0927 00:15:42.603345   22734 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0927 00:15:42.603398   22734 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0927 00:15:42.603477   22734 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0927 00:15:42.603551   22734 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0927 00:15:42.603630   22734 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0927 00:15:42.603706   22734 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0927 00:15:42.605131   22734 out.go:235]   - Booting up control plane ...
	I0927 00:15:42.605235   22734 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0927 00:15:42.605328   22734 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0927 00:15:42.605423   22734 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0927 00:15:42.605537   22734 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0927 00:15:42.605638   22734 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0927 00:15:42.605676   22734 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0927 00:15:42.605804   22734 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0927 00:15:42.605932   22734 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0927 00:15:42.605997   22734 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 522.399114ms
	I0927 00:15:42.606082   22734 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0927 00:15:42.606133   22734 kubeadm.go:310] [api-check] The API server is healthy after 5.501530404s
	I0927 00:15:42.606222   22734 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0927 00:15:42.606317   22734 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0927 00:15:42.606364   22734 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0927 00:15:42.606547   22734 kubeadm.go:310] [mark-control-plane] Marking the node addons-921129 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0927 00:15:42.606610   22734 kubeadm.go:310] [bootstrap-token] Using token: leky2l.ztc6mair50p3o6ds
	I0927 00:15:42.608006   22734 out.go:235]   - Configuring RBAC rules ...
	I0927 00:15:42.608109   22734 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0927 00:15:42.608198   22734 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0927 00:15:42.608334   22734 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0927 00:15:42.608441   22734 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0927 00:15:42.608553   22734 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0927 00:15:42.608655   22734 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0927 00:15:42.608757   22734 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0927 00:15:42.608800   22734 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0927 00:15:42.608841   22734 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0927 00:15:42.608848   22734 kubeadm.go:310] 
	I0927 00:15:42.608904   22734 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0927 00:15:42.608910   22734 kubeadm.go:310] 
	I0927 00:15:42.608984   22734 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0927 00:15:42.608991   22734 kubeadm.go:310] 
	I0927 00:15:42.609013   22734 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0927 00:15:42.609080   22734 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0927 00:15:42.609125   22734 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0927 00:15:42.609132   22734 kubeadm.go:310] 
	I0927 00:15:42.609182   22734 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0927 00:15:42.609188   22734 kubeadm.go:310] 
	I0927 00:15:42.609227   22734 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0927 00:15:42.609237   22734 kubeadm.go:310] 
	I0927 00:15:42.609293   22734 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0927 00:15:42.609358   22734 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0927 00:15:42.609454   22734 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0927 00:15:42.609466   22734 kubeadm.go:310] 
	I0927 00:15:42.609540   22734 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0927 00:15:42.609658   22734 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0927 00:15:42.609668   22734 kubeadm.go:310] 
	I0927 00:15:42.609778   22734 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token leky2l.ztc6mair50p3o6ds \
	I0927 00:15:42.609933   22734 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:b00e177ee4ba99eb87fc07a2d6b0a097e4d3eb5069a95678e980628330ec03f1 \
	I0927 00:15:42.609971   22734 kubeadm.go:310] 	--control-plane 
	I0927 00:15:42.609980   22734 kubeadm.go:310] 
	I0927 00:15:42.610081   22734 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0927 00:15:42.610089   22734 kubeadm.go:310] 
	I0927 00:15:42.610170   22734 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token leky2l.ztc6mair50p3o6ds \
	I0927 00:15:42.610303   22734 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:b00e177ee4ba99eb87fc07a2d6b0a097e4d3eb5069a95678e980628330ec03f1 
	I0927 00:15:42.610315   22734 cni.go:84] Creating CNI manager for ""
	I0927 00:15:42.610327   22734 cni.go:158] "kvm2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0927 00:15:42.612473   22734 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0927 00:15:42.613561   22734 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0927 00:15:42.624413   22734 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0927 00:15:42.646561   22734 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0927 00:15:42.646643   22734 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0927 00:15:42.646670   22734 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-921129 minikube.k8s.io/updated_at=2024_09_27T00_15_42_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=eee16a295c071ed5a0e96cbbc00bcd13b2654625 minikube.k8s.io/name=addons-921129 minikube.k8s.io/primary=true
	I0927 00:15:42.676514   22734 ops.go:34] apiserver oom_adj: -16
	I0927 00:15:42.802525   22734 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0927 00:15:43.303003   22734 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0927 00:15:43.802885   22734 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0927 00:15:44.303650   22734 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0927 00:15:44.803100   22734 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0927 00:15:45.302680   22734 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0927 00:15:45.803375   22734 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0927 00:15:46.302615   22734 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0927 00:15:46.803431   22734 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0927 00:15:46.909274   22734 kubeadm.go:1113] duration metric: took 4.262675956s to wait for elevateKubeSystemPrivileges
	I0927 00:15:46.909331   22734 kubeadm.go:394] duration metric: took 14.611361828s to StartCluster
	I0927 00:15:46.909361   22734 settings.go:142] acquiring lock: {Name:mk9b752d831af11b021110afb1c6a682e0073dbb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 00:15:46.909540   22734 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19711-14912/kubeconfig
	I0927 00:15:46.910058   22734 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19711-14912/kubeconfig: {Name:mk7b3553d46a3d3c9d2333c30b71c7bb7a230ff8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 00:15:46.910328   22734 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.24 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0927 00:15:46.910337   22734 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0927 00:15:46.910375   22734 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0927 00:15:46.910511   22734 addons.go:69] Setting yakd=true in profile "addons-921129"
	I0927 00:15:46.910533   22734 addons.go:234] Setting addon yakd=true in "addons-921129"
	I0927 00:15:46.910546   22734 addons.go:69] Setting metrics-server=true in profile "addons-921129"
	I0927 00:15:46.910545   22734 addons.go:69] Setting inspektor-gadget=true in profile "addons-921129"
	I0927 00:15:46.910569   22734 addons.go:69] Setting registry=true in profile "addons-921129"
	I0927 00:15:46.910575   22734 addons.go:234] Setting addon inspektor-gadget=true in "addons-921129"
	I0927 00:15:46.910579   22734 host.go:66] Checking if "addons-921129" exists ...
	I0927 00:15:46.910584   22734 addons.go:234] Setting addon registry=true in "addons-921129"
	I0927 00:15:46.910590   22734 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-921129"
	I0927 00:15:46.910620   22734 host.go:66] Checking if "addons-921129" exists ...
	I0927 00:15:46.910624   22734 addons.go:69] Setting cloud-spanner=true in profile "addons-921129"
	I0927 00:15:46.910693   22734 addons.go:234] Setting addon cloud-spanner=true in "addons-921129"
	I0927 00:15:46.910722   22734 host.go:66] Checking if "addons-921129" exists ...
	I0927 00:15:46.910562   22734 addons.go:234] Setting addon metrics-server=true in "addons-921129"
	I0927 00:15:46.910809   22734 host.go:66] Checking if "addons-921129" exists ...
	I0927 00:15:46.911035   22734 config.go:182] Loaded profile config "addons-921129": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0927 00:15:46.910589   22734 addons.go:69] Setting default-storageclass=true in profile "addons-921129"
	I0927 00:15:46.911159   22734 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-921129"
	I0927 00:15:46.911163   22734 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0927 00:15:46.911177   22734 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0927 00:15:46.911204   22734 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 00:15:46.910630   22734 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-921129"
	I0927 00:15:46.910620   22734 host.go:66] Checking if "addons-921129" exists ...
	I0927 00:15:46.911262   22734 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0927 00:15:46.911294   22734 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 00:15:46.911317   22734 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 00:15:46.911410   22734 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0927 00:15:46.910630   22734 addons.go:69] Setting gcp-auth=true in profile "addons-921129"
	I0927 00:15:46.911452   22734 mustload.go:65] Loading cluster: addons-921129
	I0927 00:15:46.911455   22734 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 00:15:46.910639   22734 addons.go:69] Setting ingress=true in profile "addons-921129"
	I0927 00:15:46.911536   22734 addons.go:234] Setting addon ingress=true in "addons-921129"
	I0927 00:15:46.910641   22734 addons.go:69] Setting volcano=true in profile "addons-921129"
	I0927 00:15:46.911574   22734 host.go:66] Checking if "addons-921129" exists ...
	I0927 00:15:46.911652   22734 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0927 00:15:46.911667   22734 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0927 00:15:46.911684   22734 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 00:15:46.911710   22734 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 00:15:46.911936   22734 config.go:182] Loaded profile config "addons-921129": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0927 00:15:46.911574   22734 addons.go:234] Setting addon volcano=true in "addons-921129"
	I0927 00:15:46.912063   22734 host.go:66] Checking if "addons-921129" exists ...
	I0927 00:15:46.912089   22734 host.go:66] Checking if "addons-921129" exists ...
	I0927 00:15:46.912284   22734 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0927 00:15:46.912331   22734 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 00:15:46.912474   22734 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0927 00:15:46.912514   22734 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 00:15:46.912564   22734 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0927 00:15:46.912601   22734 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 00:15:46.912748   22734 out.go:177] * Verifying Kubernetes components...
	I0927 00:15:46.910649   22734 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-921129"
	I0927 00:15:46.912892   22734 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-921129"
	I0927 00:15:46.910659   22734 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-921129"
	I0927 00:15:46.913138   22734 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-921129"
	I0927 00:15:46.913188   22734 host.go:66] Checking if "addons-921129" exists ...
	I0927 00:15:46.913279   22734 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0927 00:15:46.913313   22734 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 00:15:46.910535   22734 addons.go:69] Setting storage-provisioner=true in profile "addons-921129"
	I0927 00:15:46.913536   22734 addons.go:234] Setting addon storage-provisioner=true in "addons-921129"
	I0927 00:15:46.913589   22734 host.go:66] Checking if "addons-921129" exists ...
	I0927 00:15:46.913732   22734 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0927 00:15:46.913811   22734 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 00:15:46.910658   22734 addons.go:69] Setting volumesnapshots=true in profile "addons-921129"
	I0927 00:15:46.914092   22734 addons.go:234] Setting addon volumesnapshots=true in "addons-921129"
	I0927 00:15:46.914133   22734 host.go:66] Checking if "addons-921129" exists ...
	I0927 00:15:46.910647   22734 addons.go:69] Setting ingress-dns=true in profile "addons-921129"
	I0927 00:15:46.914338   22734 addons.go:234] Setting addon ingress-dns=true in "addons-921129"
	I0927 00:15:46.914382   22734 host.go:66] Checking if "addons-921129" exists ...
	I0927 00:15:46.911993   22734 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0927 00:15:46.914623   22734 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 00:15:46.914994   22734 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0927 00:15:46.934929   22734 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36335
	I0927 00:15:46.934957   22734 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38139
	I0927 00:15:46.935196   22734 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45859
	I0927 00:15:46.935488   22734 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42189
	I0927 00:15:46.935972   22734 main.go:141] libmachine: () Calling .GetVersion
	I0927 00:15:46.936033   22734 main.go:141] libmachine: () Calling .GetVersion
	I0927 00:15:46.936096   22734 main.go:141] libmachine: () Calling .GetVersion
	I0927 00:15:46.936111   22734 main.go:141] libmachine: () Calling .GetVersion
	I0927 00:15:46.936504   22734 main.go:141] libmachine: Using API Version  1
	I0927 00:15:46.936525   22734 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 00:15:46.936545   22734 main.go:141] libmachine: Using API Version  1
	I0927 00:15:46.936556   22734 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 00:15:46.936601   22734 main.go:141] libmachine: Using API Version  1
	I0927 00:15:46.936612   22734 main.go:141] libmachine: Using API Version  1
	I0927 00:15:46.936641   22734 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 00:15:46.936622   22734 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 00:15:46.936858   22734 main.go:141] libmachine: () Calling .GetMachineName
	I0927 00:15:46.936908   22734 main.go:141] libmachine: () Calling .GetMachineName
	I0927 00:15:46.936991   22734 main.go:141] libmachine: (addons-921129) Calling .GetState
	I0927 00:15:46.937138   22734 main.go:141] libmachine: (addons-921129) Calling .GetState
	I0927 00:15:46.937304   22734 main.go:141] libmachine: () Calling .GetMachineName
	I0927 00:15:46.937596   22734 main.go:141] libmachine: () Calling .GetMachineName
	I0927 00:15:46.937861   22734 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0927 00:15:46.937902   22734 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 00:15:46.938092   22734 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0927 00:15:46.938128   22734 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 00:15:46.938957   22734 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41583
	I0927 00:15:46.939516   22734 host.go:66] Checking if "addons-921129" exists ...
	I0927 00:15:46.943509   22734 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0927 00:15:46.943560   22734 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 00:15:46.944330   22734 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0927 00:15:46.944371   22734 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 00:15:46.944858   22734 addons.go:234] Setting addon default-storageclass=true in "addons-921129"
	I0927 00:15:46.944975   22734 host.go:66] Checking if "addons-921129" exists ...
	I0927 00:15:46.945114   22734 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0927 00:15:46.945160   22734 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 00:15:46.945714   22734 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0927 00:15:46.945811   22734 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 00:15:46.945844   22734 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0927 00:15:46.945888   22734 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 00:15:46.947657   22734 main.go:141] libmachine: () Calling .GetVersion
	I0927 00:15:46.948567   22734 main.go:141] libmachine: Using API Version  1
	I0927 00:15:46.948597   22734 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 00:15:46.949018   22734 main.go:141] libmachine: () Calling .GetMachineName
	I0927 00:15:46.949799   22734 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0927 00:15:46.949848   22734 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 00:15:46.958143   22734 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42369
	I0927 00:15:46.963836   22734 main.go:141] libmachine: () Calling .GetVersion
	I0927 00:15:46.964467   22734 main.go:141] libmachine: Using API Version  1
	I0927 00:15:46.964496   22734 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 00:15:46.965054   22734 main.go:141] libmachine: () Calling .GetMachineName
	I0927 00:15:46.965648   22734 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0927 00:15:46.965697   22734 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 00:15:46.983100   22734 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38765
	I0927 00:15:46.983637   22734 main.go:141] libmachine: () Calling .GetVersion
	I0927 00:15:46.984132   22734 main.go:141] libmachine: Using API Version  1
	I0927 00:15:46.984154   22734 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 00:15:46.984536   22734 main.go:141] libmachine: () Calling .GetMachineName
	I0927 00:15:46.985659   22734 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0927 00:15:46.985703   22734 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 00:15:46.988543   22734 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38053
	I0927 00:15:46.988562   22734 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44847
	I0927 00:15:46.988689   22734 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36395
	I0927 00:15:46.988883   22734 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34745
	I0927 00:15:46.989038   22734 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44251
	I0927 00:15:46.989219   22734 main.go:141] libmachine: () Calling .GetVersion
	I0927 00:15:46.989395   22734 main.go:141] libmachine: () Calling .GetVersion
	I0927 00:15:46.990294   22734 main.go:141] libmachine: () Calling .GetVersion
	I0927 00:15:46.990310   22734 main.go:141] libmachine: () Calling .GetVersion
	I0927 00:15:46.990479   22734 main.go:141] libmachine: Using API Version  1
	I0927 00:15:46.990497   22734 main.go:141] libmachine: Using API Version  1
	I0927 00:15:46.990508   22734 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 00:15:46.990538   22734 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 00:15:46.991026   22734 main.go:141] libmachine: () Calling .GetMachineName
	I0927 00:15:46.991193   22734 main.go:141] libmachine: Using API Version  1
	I0927 00:15:46.991208   22734 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 00:15:46.991365   22734 main.go:141] libmachine: () Calling .GetMachineName
	I0927 00:15:46.991829   22734 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0927 00:15:46.991864   22734 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 00:15:46.992223   22734 main.go:141] libmachine: Using API Version  1
	I0927 00:15:46.992240   22734 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 00:15:46.992304   22734 main.go:141] libmachine: () Calling .GetMachineName
	I0927 00:15:46.992951   22734 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0927 00:15:46.992986   22734 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 00:15:46.993217   22734 main.go:141] libmachine: (addons-921129) Calling .GetState
	I0927 00:15:46.994028   22734 main.go:141] libmachine: () Calling .GetVersion
	I0927 00:15:46.994611   22734 main.go:141] libmachine: Using API Version  1
	I0927 00:15:46.994628   22734 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 00:15:46.995033   22734 main.go:141] libmachine: () Calling .GetMachineName
	I0927 00:15:46.995201   22734 main.go:141] libmachine: (addons-921129) Calling .GetState
	I0927 00:15:46.996106   22734 main.go:141] libmachine: () Calling .GetMachineName
	I0927 00:15:46.996484   22734 main.go:141] libmachine: (addons-921129) Calling .GetState
	I0927 00:15:46.996550   22734 main.go:141] libmachine: (addons-921129) Calling .DriverName
	I0927 00:15:46.996681   22734 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41295
	I0927 00:15:46.997681   22734 main.go:141] libmachine: () Calling .GetVersion
	I0927 00:15:46.998077   22734 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34935
	I0927 00:15:46.999032   22734 main.go:141] libmachine: () Calling .GetVersion
	I0927 00:15:46.999121   22734 main.go:141] libmachine: Using API Version  1
	I0927 00:15:46.999149   22734 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 00:15:46.999232   22734 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I0927 00:15:46.999991   22734 main.go:141] libmachine: Using API Version  1
	I0927 00:15:47.000025   22734 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 00:15:47.000438   22734 main.go:141] libmachine: () Calling .GetMachineName
	I0927 00:15:47.000735   22734 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0927 00:15:47.000759   22734 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0927 00:15:47.000792   22734 main.go:141] libmachine: (addons-921129) Calling .GetSSHHostname
	I0927 00:15:47.001019   22734 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0927 00:15:47.001067   22734 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 00:15:47.001462   22734 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-921129"
	I0927 00:15:47.001523   22734 host.go:66] Checking if "addons-921129" exists ...
	I0927 00:15:47.002315   22734 main.go:141] libmachine: (addons-921129) Calling .DriverName
	I0927 00:15:47.002431   22734 main.go:141] libmachine: () Calling .GetMachineName
	I0927 00:15:47.002488   22734 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0927 00:15:47.002548   22734 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 00:15:47.005013   22734 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0927 00:15:47.005062   22734 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 00:15:47.005130   22734 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0927 00:15:47.005379   22734 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35939
	I0927 00:15:47.005956   22734 main.go:141] libmachine: () Calling .GetVersion
	I0927 00:15:47.006053   22734 main.go:141] libmachine: (addons-921129) DBG | domain addons-921129 has defined MAC address 52:54:00:f6:1f:55 in network mk-addons-921129
	I0927 00:15:47.006587   22734 main.go:141] libmachine: Using API Version  1
	I0927 00:15:47.006608   22734 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 00:15:47.006682   22734 main.go:141] libmachine: (addons-921129) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:1f:55", ip: ""} in network mk-addons-921129: {Iface:virbr1 ExpiryTime:2024-09-27 01:15:06 +0000 UTC Type:0 Mac:52:54:00:f6:1f:55 Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:addons-921129 Clientid:01:52:54:00:f6:1f:55}
	I0927 00:15:47.006691   22734 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0927 00:15:47.006698   22734 main.go:141] libmachine: (addons-921129) DBG | domain addons-921129 has defined IP address 192.168.39.24 and MAC address 52:54:00:f6:1f:55 in network mk-addons-921129
	I0927 00:15:47.006708   22734 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0927 00:15:47.006730   22734 main.go:141] libmachine: (addons-921129) Calling .GetSSHPort
	I0927 00:15:47.006735   22734 main.go:141] libmachine: (addons-921129) Calling .GetSSHHostname
	I0927 00:15:47.007178   22734 main.go:141] libmachine: () Calling .GetMachineName
	I0927 00:15:47.007239   22734 main.go:141] libmachine: (addons-921129) Calling .GetSSHKeyPath
	I0927 00:15:47.007960   22734 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0927 00:15:47.007999   22734 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 00:15:47.008219   22734 main.go:141] libmachine: (addons-921129) Calling .GetSSHUsername
	I0927 00:15:47.008442   22734 sshutil.go:53] new ssh client: &{IP:192.168.39.24 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19711-14912/.minikube/machines/addons-921129/id_rsa Username:docker}
	I0927 00:15:47.010580   22734 main.go:141] libmachine: (addons-921129) DBG | domain addons-921129 has defined MAC address 52:54:00:f6:1f:55 in network mk-addons-921129
	I0927 00:15:47.011111   22734 main.go:141] libmachine: (addons-921129) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:1f:55", ip: ""} in network mk-addons-921129: {Iface:virbr1 ExpiryTime:2024-09-27 01:15:06 +0000 UTC Type:0 Mac:52:54:00:f6:1f:55 Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:addons-921129 Clientid:01:52:54:00:f6:1f:55}
	I0927 00:15:47.011137   22734 main.go:141] libmachine: (addons-921129) DBG | domain addons-921129 has defined IP address 192.168.39.24 and MAC address 52:54:00:f6:1f:55 in network mk-addons-921129
	I0927 00:15:47.011539   22734 main.go:141] libmachine: (addons-921129) Calling .GetSSHPort
	I0927 00:15:47.011822   22734 main.go:141] libmachine: (addons-921129) Calling .GetSSHKeyPath
	I0927 00:15:47.011998   22734 main.go:141] libmachine: (addons-921129) Calling .GetSSHUsername
	I0927 00:15:47.012133   22734 sshutil.go:53] new ssh client: &{IP:192.168.39.24 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19711-14912/.minikube/machines/addons-921129/id_rsa Username:docker}
	I0927 00:15:47.012884   22734 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44949
	I0927 00:15:47.013311   22734 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33233
	I0927 00:15:47.013477   22734 main.go:141] libmachine: () Calling .GetVersion
	I0927 00:15:47.013998   22734 main.go:141] libmachine: () Calling .GetVersion
	I0927 00:15:47.014285   22734 main.go:141] libmachine: Using API Version  1
	I0927 00:15:47.014301   22734 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 00:15:47.015615   22734 main.go:141] libmachine: Using API Version  1
	I0927 00:15:47.015644   22734 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 00:15:47.016067   22734 main.go:141] libmachine: () Calling .GetMachineName
	I0927 00:15:47.016300   22734 main.go:141] libmachine: (addons-921129) Calling .DriverName
	I0927 00:15:47.016363   22734 main.go:141] libmachine: () Calling .GetMachineName
	I0927 00:15:47.016512   22734 main.go:141] libmachine: (addons-921129) Calling .GetState
	I0927 00:15:47.017419   22734 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36067
	I0927 00:15:47.018467   22734 main.go:141] libmachine: (addons-921129) Calling .DriverName
	I0927 00:15:47.018536   22734 main.go:141] libmachine: () Calling .GetVersion
	I0927 00:15:47.019458   22734 main.go:141] libmachine: Using API Version  1
	I0927 00:15:47.019477   22734 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 00:15:47.019867   22734 main.go:141] libmachine: () Calling .GetMachineName
	I0927 00:15:47.020587   22734 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0927 00:15:47.020627   22734 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 00:15:47.020881   22734 out.go:177]   - Using image docker.io/registry:2.8.3
	I0927 00:15:47.023211   22734 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.7
	I0927 00:15:47.024648   22734 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0927 00:15:47.024668   22734 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0927 00:15:47.024693   22734 main.go:141] libmachine: (addons-921129) Calling .GetSSHHostname
	I0927 00:15:47.028375   22734 main.go:141] libmachine: (addons-921129) DBG | domain addons-921129 has defined MAC address 52:54:00:f6:1f:55 in network mk-addons-921129
	I0927 00:15:47.029082   22734 main.go:141] libmachine: (addons-921129) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:1f:55", ip: ""} in network mk-addons-921129: {Iface:virbr1 ExpiryTime:2024-09-27 01:15:06 +0000 UTC Type:0 Mac:52:54:00:f6:1f:55 Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:addons-921129 Clientid:01:52:54:00:f6:1f:55}
	I0927 00:15:47.029119   22734 main.go:141] libmachine: (addons-921129) DBG | domain addons-921129 has defined IP address 192.168.39.24 and MAC address 52:54:00:f6:1f:55 in network mk-addons-921129
	I0927 00:15:47.029354   22734 main.go:141] libmachine: (addons-921129) Calling .GetSSHPort
	I0927 00:15:47.029520   22734 main.go:141] libmachine: (addons-921129) Calling .GetSSHKeyPath
	I0927 00:15:47.029656   22734 main.go:141] libmachine: (addons-921129) Calling .GetSSHUsername
	I0927 00:15:47.029787   22734 sshutil.go:53] new ssh client: &{IP:192.168.39.24 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19711-14912/.minikube/machines/addons-921129/id_rsa Username:docker}
	I0927 00:15:47.031380   22734 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38453
	I0927 00:15:47.032076   22734 main.go:141] libmachine: () Calling .GetVersion
	I0927 00:15:47.032728   22734 main.go:141] libmachine: Using API Version  1
	I0927 00:15:47.032748   22734 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 00:15:47.033110   22734 main.go:141] libmachine: () Calling .GetMachineName
	I0927 00:15:47.033273   22734 main.go:141] libmachine: (addons-921129) Calling .GetState
	I0927 00:15:47.035503   22734 main.go:141] libmachine: (addons-921129) Calling .DriverName
	I0927 00:15:47.038019   22734 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0
	I0927 00:15:47.038975   22734 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33215
	I0927 00:15:47.039508   22734 main.go:141] libmachine: () Calling .GetVersion
	I0927 00:15:47.040661   22734 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44191
	I0927 00:15:47.040738   22734 main.go:141] libmachine: Using API Version  1
	I0927 00:15:47.040757   22734 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 00:15:47.041032   22734 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0927 00:15:47.041047   22734 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0927 00:15:47.041070   22734 main.go:141] libmachine: (addons-921129) Calling .GetSSHHostname
	I0927 00:15:47.041308   22734 main.go:141] libmachine: () Calling .GetMachineName
	I0927 00:15:47.041374   22734 main.go:141] libmachine: () Calling .GetVersion
	I0927 00:15:47.041967   22734 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0927 00:15:47.042007   22734 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 00:15:47.042363   22734 main.go:141] libmachine: Using API Version  1
	I0927 00:15:47.042379   22734 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 00:15:47.046501   22734 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46083
	I0927 00:15:47.047235   22734 main.go:141] libmachine: () Calling .GetVersion
	I0927 00:15:47.047633   22734 main.go:141] libmachine: () Calling .GetMachineName
	I0927 00:15:47.047976   22734 main.go:141] libmachine: (addons-921129) Calling .GetState
	I0927 00:15:47.048123   22734 main.go:141] libmachine: Using API Version  1
	I0927 00:15:47.048144   22734 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 00:15:47.048889   22734 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38689
	I0927 00:15:47.048904   22734 main.go:141] libmachine: () Calling .GetMachineName
	I0927 00:15:47.049137   22734 main.go:141] libmachine: (addons-921129) Calling .GetState
	I0927 00:15:47.049622   22734 main.go:141] libmachine: () Calling .GetVersion
	I0927 00:15:47.050197   22734 main.go:141] libmachine: Using API Version  1
	I0927 00:15:47.050229   22734 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 00:15:47.050791   22734 main.go:141] libmachine: () Calling .GetMachineName
	I0927 00:15:47.051147   22734 main.go:141] libmachine: (addons-921129) Calling .GetState
	I0927 00:15:47.051905   22734 main.go:141] libmachine: (addons-921129) DBG | domain addons-921129 has defined MAC address 52:54:00:f6:1f:55 in network mk-addons-921129
	I0927 00:15:47.052420   22734 main.go:141] libmachine: (addons-921129) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:1f:55", ip: ""} in network mk-addons-921129: {Iface:virbr1 ExpiryTime:2024-09-27 01:15:06 +0000 UTC Type:0 Mac:52:54:00:f6:1f:55 Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:addons-921129 Clientid:01:52:54:00:f6:1f:55}
	I0927 00:15:47.052438   22734 main.go:141] libmachine: (addons-921129) DBG | domain addons-921129 has defined IP address 192.168.39.24 and MAC address 52:54:00:f6:1f:55 in network mk-addons-921129
	I0927 00:15:47.052870   22734 main.go:141] libmachine: (addons-921129) Calling .GetSSHPort
	I0927 00:15:47.053292   22734 main.go:141] libmachine: (addons-921129) Calling .GetSSHKeyPath
	I0927 00:15:47.053461   22734 main.go:141] libmachine: (addons-921129) Calling .GetSSHUsername
	I0927 00:15:47.053932   22734 sshutil.go:53] new ssh client: &{IP:192.168.39.24 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19711-14912/.minikube/machines/addons-921129/id_rsa Username:docker}
	I0927 00:15:47.054240   22734 main.go:141] libmachine: (addons-921129) Calling .DriverName
	I0927 00:15:47.055235   22734 main.go:141] libmachine: (addons-921129) Calling .DriverName
	I0927 00:15:47.056185   22734 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44503
	I0927 00:15:47.056409   22734 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36589
	I0927 00:15:47.056546   22734 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0927 00:15:47.056616   22734 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0927 00:15:47.056680   22734 main.go:141] libmachine: (addons-921129) Calling .DriverName
	I0927 00:15:47.056765   22734 main.go:141] libmachine: () Calling .GetVersion
	I0927 00:15:47.057304   22734 main.go:141] libmachine: () Calling .GetVersion
	I0927 00:15:47.057365   22734 main.go:141] libmachine: Using API Version  1
	I0927 00:15:47.057633   22734 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 00:15:47.058147   22734 main.go:141] libmachine: Using API Version  1
	I0927 00:15:47.058149   22734 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0927 00:15:47.058161   22734 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 00:15:47.058164   22734 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0927 00:15:47.058183   22734 main.go:141] libmachine: (addons-921129) Calling .GetSSHHostname
	I0927 00:15:47.058238   22734 main.go:141] libmachine: () Calling .GetMachineName
	I0927 00:15:47.058618   22734 main.go:141] libmachine: () Calling .GetMachineName
	I0927 00:15:47.059217   22734 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0927 00:15:47.059260   22734 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 00:15:47.059351   22734 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0927 00:15:47.059497   22734 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.24
	I0927 00:15:47.060310   22734 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0927 00:15:47.060339   22734 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 00:15:47.060415   22734 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39139
	I0927 00:15:47.060882   22734 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0927 00:15:47.060917   22734 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0927 00:15:47.060937   22734 main.go:141] libmachine: (addons-921129) Calling .GetSSHHostname
	I0927 00:15:47.061034   22734 main.go:141] libmachine: () Calling .GetVersion
	I0927 00:15:47.061679   22734 main.go:141] libmachine: Using API Version  1
	I0927 00:15:47.061696   22734 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 00:15:47.061807   22734 main.go:141] libmachine: (addons-921129) DBG | domain addons-921129 has defined MAC address 52:54:00:f6:1f:55 in network mk-addons-921129
	I0927 00:15:47.062301   22734 main.go:141] libmachine: () Calling .GetMachineName
	I0927 00:15:47.062363   22734 main.go:141] libmachine: (addons-921129) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:1f:55", ip: ""} in network mk-addons-921129: {Iface:virbr1 ExpiryTime:2024-09-27 01:15:06 +0000 UTC Type:0 Mac:52:54:00:f6:1f:55 Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:addons-921129 Clientid:01:52:54:00:f6:1f:55}
	I0927 00:15:47.062375   22734 main.go:141] libmachine: (addons-921129) DBG | domain addons-921129 has defined IP address 192.168.39.24 and MAC address 52:54:00:f6:1f:55 in network mk-addons-921129
	I0927 00:15:47.062569   22734 main.go:141] libmachine: (addons-921129) Calling .GetSSHPort
	I0927 00:15:47.062726   22734 main.go:141] libmachine: (addons-921129) Calling .GetSSHKeyPath
	I0927 00:15:47.062894   22734 main.go:141] libmachine: (addons-921129) Calling .GetSSHUsername
	I0927 00:15:47.063011   22734 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0927 00:15:47.063307   22734 sshutil.go:53] new ssh client: &{IP:192.168.39.24 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19711-14912/.minikube/machines/addons-921129/id_rsa Username:docker}
	I0927 00:15:47.063596   22734 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46211
	I0927 00:15:47.064311   22734 main.go:141] libmachine: () Calling .GetVersion
	I0927 00:15:47.064409   22734 main.go:141] libmachine: (addons-921129) Calling .GetState
	I0927 00:15:47.070534   22734 main.go:141] libmachine: Using API Version  1
	I0927 00:15:47.070566   22734 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 00:15:47.071222   22734 main.go:141] libmachine: () Calling .GetMachineName
	I0927 00:15:47.071488   22734 main.go:141] libmachine: (addons-921129) Calling .GetState
	I0927 00:15:47.072741   22734 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0927 00:15:47.073545   22734 main.go:141] libmachine: (addons-921129) DBG | domain addons-921129 has defined MAC address 52:54:00:f6:1f:55 in network mk-addons-921129
	I0927 00:15:47.073883   22734 main.go:141] libmachine: (addons-921129) Calling .DriverName
	I0927 00:15:47.074267   22734 main.go:141] libmachine: (addons-921129) Calling .DriverName
	I0927 00:15:47.074782   22734 main.go:141] libmachine: (addons-921129) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:1f:55", ip: ""} in network mk-addons-921129: {Iface:virbr1 ExpiryTime:2024-09-27 01:15:06 +0000 UTC Type:0 Mac:52:54:00:f6:1f:55 Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:addons-921129 Clientid:01:52:54:00:f6:1f:55}
	I0927 00:15:47.074999   22734 main.go:141] libmachine: (addons-921129) DBG | domain addons-921129 has defined IP address 192.168.39.24 and MAC address 52:54:00:f6:1f:55 in network mk-addons-921129
	I0927 00:15:47.075034   22734 main.go:141] libmachine: (addons-921129) Calling .GetSSHPort
	I0927 00:15:47.075229   22734 main.go:141] libmachine: (addons-921129) Calling .GetSSHKeyPath
	I0927 00:15:47.075378   22734 main.go:141] libmachine: (addons-921129) Calling .GetSSHUsername
	I0927 00:15:47.075528   22734 sshutil.go:53] new ssh client: &{IP:192.168.39.24 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19711-14912/.minikube/machines/addons-921129/id_rsa Username:docker}
	I0927 00:15:47.075728   22734 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0927 00:15:47.075815   22734 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0927 00:15:47.075841   22734 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0927 00:15:47.078983   22734 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44019
	I0927 00:15:47.080574   22734 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0927 00:15:47.080593   22734 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0927 00:15:47.080615   22734 main.go:141] libmachine: (addons-921129) Calling .GetSSHHostname
	I0927 00:15:47.081584   22734 main.go:141] libmachine: () Calling .GetVersion
	I0927 00:15:47.082178   22734 main.go:141] libmachine: Using API Version  1
	I0927 00:15:47.082197   22734 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 00:15:47.082449   22734 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0927 00:15:47.082469   22734 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0927 00:15:47.082487   22734 main.go:141] libmachine: (addons-921129) Calling .GetSSHHostname
	I0927 00:15:47.082709   22734 main.go:141] libmachine: () Calling .GetMachineName
	I0927 00:15:47.082924   22734 main.go:141] libmachine: (addons-921129) Calling .GetState
	I0927 00:15:47.083122   22734 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44671
	I0927 00:15:47.083939   22734 main.go:141] libmachine: () Calling .GetVersion
	I0927 00:15:47.084065   22734 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44815
	I0927 00:15:47.084481   22734 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0927 00:15:47.084530   22734 main.go:141] libmachine: Using API Version  1
	I0927 00:15:47.084549   22734 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 00:15:47.084570   22734 main.go:141] libmachine: () Calling .GetVersion
	I0927 00:15:47.085097   22734 main.go:141] libmachine: () Calling .GetMachineName
	I0927 00:15:47.085311   22734 main.go:141] libmachine: (addons-921129) Calling .GetState
	I0927 00:15:47.085417   22734 main.go:141] libmachine: Using API Version  1
	I0927 00:15:47.085447   22734 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 00:15:47.085705   22734 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40593
	I0927 00:15:47.085809   22734 main.go:141] libmachine: () Calling .GetMachineName
	I0927 00:15:47.086137   22734 main.go:141] libmachine: (addons-921129) Calling .GetState
	I0927 00:15:47.086184   22734 main.go:141] libmachine: () Calling .GetVersion
	I0927 00:15:47.086671   22734 main.go:141] libmachine: Using API Version  1
	I0927 00:15:47.086691   22734 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 00:15:47.087108   22734 main.go:141] libmachine: () Calling .GetMachineName
	I0927 00:15:47.087299   22734 main.go:141] libmachine: (addons-921129) Calling .GetState
	I0927 00:15:47.087375   22734 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0927 00:15:47.087836   22734 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35711
	I0927 00:15:47.087993   22734 main.go:141] libmachine: (addons-921129) DBG | domain addons-921129 has defined MAC address 52:54:00:f6:1f:55 in network mk-addons-921129
	I0927 00:15:47.088268   22734 main.go:141] libmachine: () Calling .GetVersion
	I0927 00:15:47.088664   22734 main.go:141] libmachine: (addons-921129) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:1f:55", ip: ""} in network mk-addons-921129: {Iface:virbr1 ExpiryTime:2024-09-27 01:15:06 +0000 UTC Type:0 Mac:52:54:00:f6:1f:55 Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:addons-921129 Clientid:01:52:54:00:f6:1f:55}
	I0927 00:15:47.088686   22734 main.go:141] libmachine: (addons-921129) DBG | domain addons-921129 has defined IP address 192.168.39.24 and MAC address 52:54:00:f6:1f:55 in network mk-addons-921129
	I0927 00:15:47.088971   22734 main.go:141] libmachine: (addons-921129) Calling .GetSSHPort
	I0927 00:15:47.089142   22734 main.go:141] libmachine: Using API Version  1
	I0927 00:15:47.089158   22734 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 00:15:47.089343   22734 main.go:141] libmachine: (addons-921129) Calling .GetSSHKeyPath
	I0927 00:15:47.089407   22734 main.go:141] libmachine: (addons-921129) Calling .DriverName
	I0927 00:15:47.089773   22734 main.go:141] libmachine: (addons-921129) Calling .GetSSHUsername
	I0927 00:15:47.089785   22734 main.go:141] libmachine: (addons-921129) Calling .DriverName
	I0927 00:15:47.089836   22734 main.go:141] libmachine: () Calling .GetMachineName
	I0927 00:15:47.089896   22734 main.go:141] libmachine: (addons-921129) DBG | domain addons-921129 has defined MAC address 52:54:00:f6:1f:55 in network mk-addons-921129
	I0927 00:15:47.089957   22734 sshutil.go:53] new ssh client: &{IP:192.168.39.24 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19711-14912/.minikube/machines/addons-921129/id_rsa Username:docker}
	I0927 00:15:47.090107   22734 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0927 00:15:47.090275   22734 main.go:141] libmachine: (addons-921129) Calling .GetState
	I0927 00:15:47.090905   22734 main.go:141] libmachine: (addons-921129) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:1f:55", ip: ""} in network mk-addons-921129: {Iface:virbr1 ExpiryTime:2024-09-27 01:15:06 +0000 UTC Type:0 Mac:52:54:00:f6:1f:55 Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:addons-921129 Clientid:01:52:54:00:f6:1f:55}
	I0927 00:15:47.090934   22734 main.go:141] libmachine: (addons-921129) DBG | domain addons-921129 has defined IP address 192.168.39.24 and MAC address 52:54:00:f6:1f:55 in network mk-addons-921129
	I0927 00:15:47.091149   22734 out.go:177]   - Using image docker.io/volcanosh/vc-webhook-manager:v1.10.0
	I0927 00:15:47.091172   22734 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0927 00:15:47.091544   22734 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0927 00:15:47.091559   22734 main.go:141] libmachine: (addons-921129) Calling .GetSSHHostname
	I0927 00:15:47.091252   22734 main.go:141] libmachine: (addons-921129) Calling .GetSSHPort
	I0927 00:15:47.092311   22734 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0927 00:15:47.092725   22734 main.go:141] libmachine: (addons-921129) Calling .DriverName
	I0927 00:15:47.092731   22734 main.go:141] libmachine: (addons-921129) Calling .DriverName
	I0927 00:15:47.092788   22734 main.go:141] libmachine: (addons-921129) Calling .GetSSHKeyPath
	I0927 00:15:47.091492   22734 main.go:141] libmachine: (addons-921129) Calling .DriverName
	I0927 00:15:47.093345   22734 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0927 00:15:47.093662   22734 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0927 00:15:47.093682   22734 main.go:141] libmachine: (addons-921129) Calling .GetSSHHostname
	I0927 00:15:47.093370   22734 main.go:141] libmachine: (addons-921129) Calling .GetSSHUsername
	I0927 00:15:47.094238   22734 sshutil.go:53] new ssh client: &{IP:192.168.39.24 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19711-14912/.minikube/machines/addons-921129/id_rsa Username:docker}
	I0927 00:15:47.095036   22734 out.go:177]   - Using image docker.io/volcanosh/vc-controller-manager:v1.10.0
	I0927 00:15:47.095866   22734 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.2
	I0927 00:15:47.095883   22734 out.go:177]   - Using image docker.io/busybox:stable
	I0927 00:15:47.096689   22734 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	I0927 00:15:47.097390   22734 main.go:141] libmachine: (addons-921129) DBG | domain addons-921129 has defined MAC address 52:54:00:f6:1f:55 in network mk-addons-921129
	I0927 00:15:47.097425   22734 main.go:141] libmachine: (addons-921129) DBG | domain addons-921129 has defined MAC address 52:54:00:f6:1f:55 in network mk-addons-921129
	I0927 00:15:47.097569   22734 main.go:141] libmachine: (addons-921129) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:1f:55", ip: ""} in network mk-addons-921129: {Iface:virbr1 ExpiryTime:2024-09-27 01:15:06 +0000 UTC Type:0 Mac:52:54:00:f6:1f:55 Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:addons-921129 Clientid:01:52:54:00:f6:1f:55}
	I0927 00:15:47.097590   22734 main.go:141] libmachine: (addons-921129) DBG | domain addons-921129 has defined IP address 192.168.39.24 and MAC address 52:54:00:f6:1f:55 in network mk-addons-921129
	I0927 00:15:47.097461   22734 main.go:141] libmachine: (addons-921129) Calling .GetSSHPort
	I0927 00:15:47.097673   22734 main.go:141] libmachine: (addons-921129) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:1f:55", ip: ""} in network mk-addons-921129: {Iface:virbr1 ExpiryTime:2024-09-27 01:15:06 +0000 UTC Type:0 Mac:52:54:00:f6:1f:55 Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:addons-921129 Clientid:01:52:54:00:f6:1f:55}
	I0927 00:15:47.097673   22734 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0927 00:15:47.097689   22734 main.go:141] libmachine: (addons-921129) DBG | domain addons-921129 has defined IP address 192.168.39.24 and MAC address 52:54:00:f6:1f:55 in network mk-addons-921129
	I0927 00:15:47.097689   22734 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0927 00:15:47.097710   22734 main.go:141] libmachine: (addons-921129) Calling .GetSSHHostname
	I0927 00:15:47.097778   22734 main.go:141] libmachine: (addons-921129) Calling .GetSSHKeyPath
	I0927 00:15:47.098267   22734 main.go:141] libmachine: (addons-921129) Calling .GetSSHUsername
	I0927 00:15:47.098299   22734 main.go:141] libmachine: (addons-921129) Calling .GetSSHPort
	I0927 00:15:47.098474   22734 sshutil.go:53] new ssh client: &{IP:192.168.39.24 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19711-14912/.minikube/machines/addons-921129/id_rsa Username:docker}
	I0927 00:15:47.098492   22734 main.go:141] libmachine: (addons-921129) Calling .GetSSHKeyPath
	I0927 00:15:47.099022   22734 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0927 00:15:47.099045   22734 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0927 00:15:47.099072   22734 main.go:141] libmachine: (addons-921129) Calling .GetSSHHostname
	I0927 00:15:47.099198   22734 main.go:141] libmachine: (addons-921129) Calling .GetSSHUsername
	I0927 00:15:47.099380   22734 sshutil.go:53] new ssh client: &{IP:192.168.39.24 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19711-14912/.minikube/machines/addons-921129/id_rsa Username:docker}
	I0927 00:15:47.099750   22734 out.go:177]   - Using image docker.io/volcanosh/vc-scheduler:v1.10.0
	I0927 00:15:47.099763   22734 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0927 00:15:47.101060   22734 main.go:141] libmachine: (addons-921129) DBG | domain addons-921129 has defined MAC address 52:54:00:f6:1f:55 in network mk-addons-921129
	I0927 00:15:47.101474   22734 main.go:141] libmachine: (addons-921129) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:1f:55", ip: ""} in network mk-addons-921129: {Iface:virbr1 ExpiryTime:2024-09-27 01:15:06 +0000 UTC Type:0 Mac:52:54:00:f6:1f:55 Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:addons-921129 Clientid:01:52:54:00:f6:1f:55}
	I0927 00:15:47.101499   22734 main.go:141] libmachine: (addons-921129) DBG | domain addons-921129 has defined IP address 192.168.39.24 and MAC address 52:54:00:f6:1f:55 in network mk-addons-921129
	I0927 00:15:47.101655   22734 main.go:141] libmachine: (addons-921129) Calling .GetSSHPort
	I0927 00:15:47.101821   22734 main.go:141] libmachine: (addons-921129) Calling .GetSSHKeyPath
	I0927 00:15:47.101918   22734 main.go:141] libmachine: (addons-921129) Calling .GetSSHUsername
	I0927 00:15:47.101999   22734 sshutil.go:53] new ssh client: &{IP:192.168.39.24 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19711-14912/.minikube/machines/addons-921129/id_rsa Username:docker}
	I0927 00:15:47.102270   22734 main.go:141] libmachine: (addons-921129) DBG | domain addons-921129 has defined MAC address 52:54:00:f6:1f:55 in network mk-addons-921129
	I0927 00:15:47.102975   22734 main.go:141] libmachine: (addons-921129) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:1f:55", ip: ""} in network mk-addons-921129: {Iface:virbr1 ExpiryTime:2024-09-27 01:15:06 +0000 UTC Type:0 Mac:52:54:00:f6:1f:55 Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:addons-921129 Clientid:01:52:54:00:f6:1f:55}
	I0927 00:15:47.103192   22734 main.go:141] libmachine: (addons-921129) Calling .GetSSHPort
	I0927 00:15:47.103003   22734 main.go:141] libmachine: (addons-921129) DBG | domain addons-921129 has defined IP address 192.168.39.24 and MAC address 52:54:00:f6:1f:55 in network mk-addons-921129
	I0927 00:15:47.103253   22734 addons.go:431] installing /etc/kubernetes/addons/volcano-deployment.yaml
	I0927 00:15:47.103269   22734 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volcano-deployment.yaml (471825 bytes)
	I0927 00:15:47.103294   22734 main.go:141] libmachine: (addons-921129) Calling .GetSSHHostname
	I0927 00:15:47.103897   22734 main.go:141] libmachine: (addons-921129) Calling .GetSSHKeyPath
	I0927 00:15:47.104068   22734 main.go:141] libmachine: (addons-921129) Calling .GetSSHUsername
	I0927 00:15:47.104250   22734 sshutil.go:53] new ssh client: &{IP:192.168.39.24 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19711-14912/.minikube/machines/addons-921129/id_rsa Username:docker}
	I0927 00:15:47.104747   22734 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0927 00:15:47.106254   22734 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0927 00:15:47.106271   22734 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0927 00:15:47.106287   22734 main.go:141] libmachine: (addons-921129) Calling .GetSSHHostname
	I0927 00:15:47.106593   22734 main.go:141] libmachine: (addons-921129) DBG | domain addons-921129 has defined MAC address 52:54:00:f6:1f:55 in network mk-addons-921129
	I0927 00:15:47.107475   22734 main.go:141] libmachine: (addons-921129) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:1f:55", ip: ""} in network mk-addons-921129: {Iface:virbr1 ExpiryTime:2024-09-27 01:15:06 +0000 UTC Type:0 Mac:52:54:00:f6:1f:55 Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:addons-921129 Clientid:01:52:54:00:f6:1f:55}
	I0927 00:15:47.107511   22734 main.go:141] libmachine: (addons-921129) DBG | domain addons-921129 has defined IP address 192.168.39.24 and MAC address 52:54:00:f6:1f:55 in network mk-addons-921129
	I0927 00:15:47.107885   22734 main.go:141] libmachine: (addons-921129) Calling .GetSSHPort
	I0927 00:15:47.108270   22734 main.go:141] libmachine: (addons-921129) Calling .GetSSHKeyPath
	I0927 00:15:47.108449   22734 main.go:141] libmachine: (addons-921129) Calling .GetSSHUsername
	I0927 00:15:47.108632   22734 sshutil.go:53] new ssh client: &{IP:192.168.39.24 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19711-14912/.minikube/machines/addons-921129/id_rsa Username:docker}
	I0927 00:15:47.110542   22734 main.go:141] libmachine: (addons-921129) DBG | domain addons-921129 has defined MAC address 52:54:00:f6:1f:55 in network mk-addons-921129
	I0927 00:15:47.111094   22734 main.go:141] libmachine: (addons-921129) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:1f:55", ip: ""} in network mk-addons-921129: {Iface:virbr1 ExpiryTime:2024-09-27 01:15:06 +0000 UTC Type:0 Mac:52:54:00:f6:1f:55 Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:addons-921129 Clientid:01:52:54:00:f6:1f:55}
	I0927 00:15:47.111121   22734 main.go:141] libmachine: (addons-921129) DBG | domain addons-921129 has defined IP address 192.168.39.24 and MAC address 52:54:00:f6:1f:55 in network mk-addons-921129
	I0927 00:15:47.111448   22734 main.go:141] libmachine: (addons-921129) Calling .GetSSHPort
	I0927 00:15:47.111735   22734 main.go:141] libmachine: (addons-921129) Calling .GetSSHKeyPath
	I0927 00:15:47.111984   22734 main.go:141] libmachine: (addons-921129) Calling .GetSSHUsername
	I0927 00:15:47.112185   22734 sshutil.go:53] new ssh client: &{IP:192.168.39.24 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19711-14912/.minikube/machines/addons-921129/id_rsa Username:docker}
	W0927 00:15:47.112790   22734 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:55220->192.168.39.24:22: read: connection reset by peer
	I0927 00:15:47.112830   22734 retry.go:31] will retry after 352.602818ms: ssh: handshake failed: read tcp 192.168.39.1:55220->192.168.39.24:22: read: connection reset by peer
	I0927 00:15:47.407193   22734 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0927 00:15:47.407365   22734 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0927 00:15:47.420962   22734 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0927 00:15:47.447316   22734 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0927 00:15:47.447344   22734 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0927 00:15:47.469527   22734 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0927 00:15:47.469557   22734 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0927 00:15:47.575354   22734 node_ready.go:35] waiting up to 6m0s for node "addons-921129" to be "Ready" ...
	I0927 00:15:47.579393   22734 node_ready.go:49] node "addons-921129" has status "Ready":"True"
	I0927 00:15:47.579416   22734 node_ready.go:38] duration metric: took 4.035848ms for node "addons-921129" to be "Ready" ...
	I0927 00:15:47.579424   22734 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0927 00:15:47.588216   22734 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-g7j8q" in "kube-system" namespace to be "Ready" ...
	I0927 00:15:47.629332   22734 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0927 00:15:47.635874   22734 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0927 00:15:47.635906   22734 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0927 00:15:47.638240   22734 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0927 00:15:47.656951   22734 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0927 00:15:47.672418   22734 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0927 00:15:47.672434   22734 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0927 00:15:47.672451   22734 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0927 00:15:47.672465   22734 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0927 00:15:47.682429   22734 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0927 00:15:47.682452   22734 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0927 00:15:47.692573   22734 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml
	I0927 00:15:47.772386   22734 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0927 00:15:47.832426   22734 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0927 00:15:47.852774   22734 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0927 00:15:47.852804   22734 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0927 00:15:47.876822   22734 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0927 00:15:47.876850   22734 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0927 00:15:47.883587   22734 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0927 00:15:47.883607   22734 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0927 00:15:47.930089   22734 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0927 00:15:47.930116   22734 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0927 00:15:47.957403   22734 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0927 00:15:47.957429   22734 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0927 00:15:47.960186   22734 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0927 00:15:47.960207   22734 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0927 00:15:48.120592   22734 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0927 00:15:48.143320   22734 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0927 00:15:48.143348   22734 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0927 00:15:48.170158   22734 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0927 00:15:48.170180   22734 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0927 00:15:48.219604   22734 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0927 00:15:48.220498   22734 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0927 00:15:48.220514   22734 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0927 00:15:48.242740   22734 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0927 00:15:48.242765   22734 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0927 00:15:48.264675   22734 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0927 00:15:48.264699   22734 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0927 00:15:48.552835   22734 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0927 00:15:48.555534   22734 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0927 00:15:48.555558   22734 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0927 00:15:48.621117   22734 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0927 00:15:48.621141   22734 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0927 00:15:48.733133   22734 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0927 00:15:48.733168   22734 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0927 00:15:48.748723   22734 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0927 00:15:48.748748   22734 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0927 00:15:49.106326   22734 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0927 00:15:49.106359   22734 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0927 00:15:49.196547   22734 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0927 00:15:49.400191   22734 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0927 00:15:49.400225   22734 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0927 00:15:49.413933   22734 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0927 00:15:49.413968   22734 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0927 00:15:49.466924   22734 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0927 00:15:49.466945   22734 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0927 00:15:49.617492   22734 pod_ready.go:103] pod "coredns-7c65d6cfc9-g7j8q" in "kube-system" namespace has status "Ready":"False"
	I0927 00:15:49.728500   22734 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0927 00:15:49.728524   22734 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0927 00:15:49.876150   22734 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0927 00:15:49.876266   22734 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0927 00:15:49.966983   22734 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0927 00:15:49.991219   22734 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0927 00:15:49.991248   22734 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0927 00:15:50.132087   22734 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0927 00:15:50.212807   22734 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0927 00:15:50.212839   22734 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0927 00:15:50.275369   22734 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.867955379s)
	I0927 00:15:50.275406   22734 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0927 00:15:50.593459   22734 pod_ready.go:93] pod "coredns-7c65d6cfc9-g7j8q" in "kube-system" namespace has status "Ready":"True"
	I0927 00:15:50.593483   22734 pod_ready.go:82] duration metric: took 3.005237665s for pod "coredns-7c65d6cfc9-g7j8q" in "kube-system" namespace to be "Ready" ...
	I0927 00:15:50.593496   22734 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-v8bhm" in "kube-system" namespace to be "Ready" ...
	I0927 00:15:50.631722   22734 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0927 00:15:50.631745   22734 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0927 00:15:50.781231   22734 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-921129" context rescaled to 1 replicas
	I0927 00:15:50.875200   22734 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0927 00:15:50.875228   22734 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0927 00:15:51.105651   22734 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0927 00:15:51.447575   22734 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.026558023s)
	I0927 00:15:51.447636   22734 main.go:141] libmachine: Making call to close driver server
	I0927 00:15:51.447647   22734 main.go:141] libmachine: (addons-921129) Calling .Close
	I0927 00:15:51.448027   22734 main.go:141] libmachine: Successfully made call to close driver server
	I0927 00:15:51.448054   22734 main.go:141] libmachine: Making call to close connection to plugin binary
	I0927 00:15:51.448065   22734 main.go:141] libmachine: Making call to close driver server
	I0927 00:15:51.448074   22734 main.go:141] libmachine: (addons-921129) Calling .Close
	I0927 00:15:51.448388   22734 main.go:141] libmachine: Successfully made call to close driver server
	I0927 00:15:51.448416   22734 main.go:141] libmachine: Making call to close connection to plugin binary
	I0927 00:15:51.562874   22734 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (3.933495608s)
	I0927 00:15:51.562933   22734 main.go:141] libmachine: Making call to close driver server
	I0927 00:15:51.562934   22734 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (3.924663186s)
	I0927 00:15:51.562945   22734 main.go:141] libmachine: (addons-921129) Calling .Close
	I0927 00:15:51.562959   22734 main.go:141] libmachine: Making call to close driver server
	I0927 00:15:51.562970   22734 main.go:141] libmachine: (addons-921129) Calling .Close
	I0927 00:15:51.562988   22734 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (3.906005634s)
	I0927 00:15:51.563027   22734 main.go:141] libmachine: Making call to close driver server
	I0927 00:15:51.563042   22734 main.go:141] libmachine: (addons-921129) Calling .Close
	I0927 00:15:51.563256   22734 main.go:141] libmachine: Successfully made call to close driver server
	I0927 00:15:51.563276   22734 main.go:141] libmachine: Making call to close connection to plugin binary
	I0927 00:15:51.563285   22734 main.go:141] libmachine: Making call to close driver server
	I0927 00:15:51.563292   22734 main.go:141] libmachine: (addons-921129) Calling .Close
	I0927 00:15:51.565434   22734 main.go:141] libmachine: (addons-921129) DBG | Closing plugin on server side
	I0927 00:15:51.565436   22734 main.go:141] libmachine: Successfully made call to close driver server
	I0927 00:15:51.565449   22734 main.go:141] libmachine: Successfully made call to close driver server
	I0927 00:15:51.565435   22734 main.go:141] libmachine: (addons-921129) DBG | Closing plugin on server side
	I0927 00:15:51.565484   22734 main.go:141] libmachine: Successfully made call to close driver server
	I0927 00:15:51.565520   22734 main.go:141] libmachine: Making call to close connection to plugin binary
	I0927 00:15:51.565552   22734 main.go:141] libmachine: (addons-921129) DBG | Closing plugin on server side
	I0927 00:15:51.565570   22734 main.go:141] libmachine: Making call to close connection to plugin binary
	I0927 00:15:51.565577   22734 main.go:141] libmachine: Making call to close driver server
	I0927 00:15:51.565586   22734 main.go:141] libmachine: (addons-921129) Calling .Close
	I0927 00:15:51.565619   22734 main.go:141] libmachine: Making call to close connection to plugin binary
	I0927 00:15:51.565633   22734 main.go:141] libmachine: Making call to close driver server
	I0927 00:15:51.565641   22734 main.go:141] libmachine: (addons-921129) Calling .Close
	I0927 00:15:51.565825   22734 main.go:141] libmachine: Successfully made call to close driver server
	I0927 00:15:51.565838   22734 main.go:141] libmachine: Making call to close connection to plugin binary
	I0927 00:15:51.566656   22734 main.go:141] libmachine: (addons-921129) DBG | Closing plugin on server side
	I0927 00:15:51.566661   22734 main.go:141] libmachine: Successfully made call to close driver server
	I0927 00:15:51.566685   22734 main.go:141] libmachine: Making call to close connection to plugin binary
	I0927 00:15:51.585742   22734 main.go:141] libmachine: Making call to close driver server
	I0927 00:15:51.585766   22734 main.go:141] libmachine: (addons-921129) Calling .Close
	I0927 00:15:51.586137   22734 main.go:141] libmachine: (addons-921129) DBG | Closing plugin on server side
	I0927 00:15:51.586214   22734 main.go:141] libmachine: Successfully made call to close driver server
	I0927 00:15:51.586229   22734 main.go:141] libmachine: Making call to close connection to plugin binary
	I0927 00:15:52.745223   22734 pod_ready.go:103] pod "coredns-7c65d6cfc9-v8bhm" in "kube-system" namespace has status "Ready":"False"
	I0927 00:15:53.114669   22734 pod_ready.go:93] pod "coredns-7c65d6cfc9-v8bhm" in "kube-system" namespace has status "Ready":"True"
	I0927 00:15:53.114692   22734 pod_ready.go:82] duration metric: took 2.521187697s for pod "coredns-7c65d6cfc9-v8bhm" in "kube-system" namespace to be "Ready" ...
	I0927 00:15:53.114702   22734 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-921129" in "kube-system" namespace to be "Ready" ...
	I0927 00:15:53.130670   22734 pod_ready.go:93] pod "etcd-addons-921129" in "kube-system" namespace has status "Ready":"True"
	I0927 00:15:53.130692   22734 pod_ready.go:82] duration metric: took 15.983437ms for pod "etcd-addons-921129" in "kube-system" namespace to be "Ready" ...
	I0927 00:15:53.130702   22734 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-921129" in "kube-system" namespace to be "Ready" ...
	I0927 00:15:53.142162   22734 pod_ready.go:93] pod "kube-apiserver-addons-921129" in "kube-system" namespace has status "Ready":"True"
	I0927 00:15:53.142183   22734 pod_ready.go:82] duration metric: took 11.475348ms for pod "kube-apiserver-addons-921129" in "kube-system" namespace to be "Ready" ...
	I0927 00:15:53.142192   22734 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-921129" in "kube-system" namespace to be "Ready" ...
	I0927 00:15:54.040809   22734 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0927 00:15:54.040853   22734 main.go:141] libmachine: (addons-921129) Calling .GetSSHHostname
	I0927 00:15:54.044507   22734 main.go:141] libmachine: (addons-921129) DBG | domain addons-921129 has defined MAC address 52:54:00:f6:1f:55 in network mk-addons-921129
	I0927 00:15:54.044966   22734 main.go:141] libmachine: (addons-921129) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:1f:55", ip: ""} in network mk-addons-921129: {Iface:virbr1 ExpiryTime:2024-09-27 01:15:06 +0000 UTC Type:0 Mac:52:54:00:f6:1f:55 Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:addons-921129 Clientid:01:52:54:00:f6:1f:55}
	I0927 00:15:54.044994   22734 main.go:141] libmachine: (addons-921129) DBG | domain addons-921129 has defined IP address 192.168.39.24 and MAC address 52:54:00:f6:1f:55 in network mk-addons-921129
	I0927 00:15:54.045262   22734 main.go:141] libmachine: (addons-921129) Calling .GetSSHPort
	I0927 00:15:54.045495   22734 main.go:141] libmachine: (addons-921129) Calling .GetSSHKeyPath
	I0927 00:15:54.045661   22734 main.go:141] libmachine: (addons-921129) Calling .GetSSHUsername
	I0927 00:15:54.045833   22734 sshutil.go:53] new ssh client: &{IP:192.168.39.24 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19711-14912/.minikube/machines/addons-921129/id_rsa Username:docker}
	I0927 00:15:54.758120   22734 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0927 00:15:54.951234   22734 addons.go:234] Setting addon gcp-auth=true in "addons-921129"
	I0927 00:15:54.951289   22734 host.go:66] Checking if "addons-921129" exists ...
	I0927 00:15:54.951604   22734 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0927 00:15:54.951658   22734 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 00:15:54.968404   22734 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45725
	I0927 00:15:54.968916   22734 main.go:141] libmachine: () Calling .GetVersion
	I0927 00:15:54.969495   22734 main.go:141] libmachine: Using API Version  1
	I0927 00:15:54.969518   22734 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 00:15:54.969857   22734 main.go:141] libmachine: () Calling .GetMachineName
	I0927 00:15:54.970458   22734 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0927 00:15:54.970514   22734 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 00:15:54.987471   22734 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36453
	I0927 00:15:54.987866   22734 main.go:141] libmachine: () Calling .GetVersion
	I0927 00:15:54.988396   22734 main.go:141] libmachine: Using API Version  1
	I0927 00:15:54.988415   22734 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 00:15:54.988731   22734 main.go:141] libmachine: () Calling .GetMachineName
	I0927 00:15:54.988939   22734 main.go:141] libmachine: (addons-921129) Calling .GetState
	I0927 00:15:54.990540   22734 main.go:141] libmachine: (addons-921129) Calling .DriverName
	I0927 00:15:54.990757   22734 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0927 00:15:54.990788   22734 main.go:141] libmachine: (addons-921129) Calling .GetSSHHostname
	I0927 00:15:54.993516   22734 main.go:141] libmachine: (addons-921129) DBG | domain addons-921129 has defined MAC address 52:54:00:f6:1f:55 in network mk-addons-921129
	I0927 00:15:54.993911   22734 main.go:141] libmachine: (addons-921129) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:1f:55", ip: ""} in network mk-addons-921129: {Iface:virbr1 ExpiryTime:2024-09-27 01:15:06 +0000 UTC Type:0 Mac:52:54:00:f6:1f:55 Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:addons-921129 Clientid:01:52:54:00:f6:1f:55}
	I0927 00:15:54.993939   22734 main.go:141] libmachine: (addons-921129) DBG | domain addons-921129 has defined IP address 192.168.39.24 and MAC address 52:54:00:f6:1f:55 in network mk-addons-921129
	I0927 00:15:54.994100   22734 main.go:141] libmachine: (addons-921129) Calling .GetSSHPort
	I0927 00:15:54.994302   22734 main.go:141] libmachine: (addons-921129) Calling .GetSSHKeyPath
	I0927 00:15:54.994445   22734 main.go:141] libmachine: (addons-921129) Calling .GetSSHUsername
	I0927 00:15:54.994664   22734 sshutil.go:53] new ssh client: &{IP:192.168.39.24 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19711-14912/.minikube/machines/addons-921129/id_rsa Username:docker}
	I0927 00:15:55.149156   22734 pod_ready.go:103] pod "kube-controller-manager-addons-921129" in "kube-system" namespace has status "Ready":"False"
	I0927 00:15:55.649967   22734 pod_ready.go:93] pod "kube-controller-manager-addons-921129" in "kube-system" namespace has status "Ready":"True"
	I0927 00:15:55.649999   22734 pod_ready.go:82] duration metric: took 2.507799212s for pod "kube-controller-manager-addons-921129" in "kube-system" namespace to be "Ready" ...
	I0927 00:15:55.650013   22734 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-sw68g" in "kube-system" namespace to be "Ready" ...
	I0927 00:15:55.655720   22734 pod_ready.go:93] pod "kube-proxy-sw68g" in "kube-system" namespace has status "Ready":"True"
	I0927 00:15:55.655747   22734 pod_ready.go:82] duration metric: took 5.726089ms for pod "kube-proxy-sw68g" in "kube-system" namespace to be "Ready" ...
	I0927 00:15:55.655761   22734 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-921129" in "kube-system" namespace to be "Ready" ...
	I0927 00:15:55.661188   22734 pod_ready.go:93] pod "kube-scheduler-addons-921129" in "kube-system" namespace has status "Ready":"True"
	I0927 00:15:55.661209   22734 pod_ready.go:82] duration metric: took 5.44067ms for pod "kube-scheduler-addons-921129" in "kube-system" namespace to be "Ready" ...
	I0927 00:15:55.661217   22734 pod_ready.go:39] duration metric: took 8.081782024s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0927 00:15:55.661232   22734 api_server.go:52] waiting for apiserver process to appear ...
	I0927 00:15:55.661280   22734 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 00:16:00.564690   22734 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml: (12.872071006s)
	I0927 00:16:00.564746   22734 main.go:141] libmachine: Making call to close driver server
	I0927 00:16:00.564759   22734 main.go:141] libmachine: (addons-921129) Calling .Close
	I0927 00:16:00.564777   22734 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (12.792360368s)
	I0927 00:16:00.564820   22734 main.go:141] libmachine: Making call to close driver server
	I0927 00:16:00.564832   22734 main.go:141] libmachine: (addons-921129) Calling .Close
	I0927 00:16:00.564875   22734 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (12.732412921s)
	I0927 00:16:00.564930   22734 main.go:141] libmachine: Making call to close driver server
	I0927 00:16:00.564942   22734 main.go:141] libmachine: (addons-921129) Calling .Close
	I0927 00:16:00.565014   22734 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (12.444384919s)
	I0927 00:16:00.565054   22734 main.go:141] libmachine: Making call to close driver server
	I0927 00:16:00.565067   22734 main.go:141] libmachine: (addons-921129) Calling .Close
	I0927 00:16:00.565134   22734 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (12.345503488s)
	I0927 00:16:00.565151   22734 main.go:141] libmachine: Making call to close driver server
	I0927 00:16:00.565159   22734 main.go:141] libmachine: (addons-921129) Calling .Close
	I0927 00:16:00.565256   22734 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (12.012385518s)
	I0927 00:16:00.565285   22734 main.go:141] libmachine: Making call to close driver server
	I0927 00:16:00.565297   22734 main.go:141] libmachine: (addons-921129) Calling .Close
	I0927 00:16:00.565410   22734 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (11.368814286s)
	I0927 00:16:00.565430   22734 main.go:141] libmachine: Making call to close driver server
	I0927 00:16:00.565440   22734 main.go:141] libmachine: (addons-921129) Calling .Close
	I0927 00:16:00.565594   22734 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (10.598580348s)
	W0927 00:16:00.565626   22734 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0927 00:16:00.565646   22734 retry.go:31] will retry after 175.171134ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0927 00:16:00.565731   22734 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (10.433610424s)
	I0927 00:16:00.565750   22734 main.go:141] libmachine: Making call to close driver server
	I0927 00:16:00.565771   22734 main.go:141] libmachine: (addons-921129) Calling .Close
	I0927 00:16:00.568191   22734 main.go:141] libmachine: Successfully made call to close driver server
	I0927 00:16:00.568197   22734 main.go:141] libmachine: (addons-921129) DBG | Closing plugin on server side
	I0927 00:16:00.568211   22734 main.go:141] libmachine: Making call to close connection to plugin binary
	I0927 00:16:00.568220   22734 main.go:141] libmachine: Making call to close driver server
	I0927 00:16:00.568224   22734 main.go:141] libmachine: Successfully made call to close driver server
	I0927 00:16:00.568236   22734 main.go:141] libmachine: Making call to close connection to plugin binary
	I0927 00:16:00.568235   22734 main.go:141] libmachine: Successfully made call to close driver server
	I0927 00:16:00.568245   22734 main.go:141] libmachine: Making call to close driver server
	I0927 00:16:00.568248   22734 main.go:141] libmachine: Making call to close connection to plugin binary
	I0927 00:16:00.568252   22734 main.go:141] libmachine: (addons-921129) Calling .Close
	I0927 00:16:00.568256   22734 main.go:141] libmachine: Making call to close driver server
	I0927 00:16:00.568263   22734 main.go:141] libmachine: (addons-921129) Calling .Close
	I0927 00:16:00.568200   22734 main.go:141] libmachine: (addons-921129) DBG | Closing plugin on server side
	I0927 00:16:00.568305   22734 main.go:141] libmachine: (addons-921129) DBG | Closing plugin on server side
	I0927 00:16:00.568327   22734 main.go:141] libmachine: Successfully made call to close driver server
	I0927 00:16:00.568334   22734 main.go:141] libmachine: Making call to close connection to plugin binary
	I0927 00:16:00.568342   22734 main.go:141] libmachine: Making call to close driver server
	I0927 00:16:00.568345   22734 main.go:141] libmachine: (addons-921129) DBG | Closing plugin on server side
	I0927 00:16:00.568349   22734 main.go:141] libmachine: (addons-921129) Calling .Close
	I0927 00:16:00.568367   22734 main.go:141] libmachine: (addons-921129) DBG | Closing plugin on server side
	I0927 00:16:00.568389   22734 main.go:141] libmachine: Successfully made call to close driver server
	I0927 00:16:00.568395   22734 main.go:141] libmachine: Making call to close connection to plugin binary
	I0927 00:16:00.568398   22734 main.go:141] libmachine: (addons-921129) DBG | Closing plugin on server side
	I0927 00:16:00.568403   22734 main.go:141] libmachine: Making call to close driver server
	I0927 00:16:00.568409   22734 main.go:141] libmachine: (addons-921129) Calling .Close
	I0927 00:16:00.568418   22734 main.go:141] libmachine: (addons-921129) DBG | Closing plugin on server side
	I0927 00:16:00.568447   22734 main.go:141] libmachine: Successfully made call to close driver server
	I0927 00:16:00.568454   22734 main.go:141] libmachine: Making call to close connection to plugin binary
	I0927 00:16:00.568460   22734 main.go:141] libmachine: Successfully made call to close driver server
	I0927 00:16:00.568471   22734 main.go:141] libmachine: Making call to close connection to plugin binary
	I0927 00:16:00.568479   22734 main.go:141] libmachine: Making call to close driver server
	I0927 00:16:00.568482   22734 main.go:141] libmachine: Successfully made call to close driver server
	I0927 00:16:00.568495   22734 main.go:141] libmachine: Making call to close connection to plugin binary
	I0927 00:16:00.568502   22734 main.go:141] libmachine: Making call to close driver server
	I0927 00:16:00.568509   22734 main.go:141] libmachine: (addons-921129) Calling .Close
	I0927 00:16:00.568461   22734 main.go:141] libmachine: Making call to close driver server
	I0927 00:16:00.568540   22734 main.go:141] libmachine: (addons-921129) Calling .Close
	I0927 00:16:00.568227   22734 main.go:141] libmachine: (addons-921129) Calling .Close
	I0927 00:16:00.568486   22734 main.go:141] libmachine: (addons-921129) Calling .Close
	I0927 00:16:00.568634   22734 main.go:141] libmachine: (addons-921129) DBG | Closing plugin on server side
	I0927 00:16:00.568658   22734 main.go:141] libmachine: Successfully made call to close driver server
	I0927 00:16:00.568665   22734 main.go:141] libmachine: Making call to close connection to plugin binary
	I0927 00:16:00.568722   22734 main.go:141] libmachine: (addons-921129) DBG | Closing plugin on server side
	I0927 00:16:00.568742   22734 main.go:141] libmachine: (addons-921129) DBG | Closing plugin on server side
	I0927 00:16:00.568763   22734 main.go:141] libmachine: Successfully made call to close driver server
	I0927 00:16:00.568773   22734 main.go:141] libmachine: Making call to close connection to plugin binary
	I0927 00:16:00.568797   22734 main.go:141] libmachine: Successfully made call to close driver server
	I0927 00:16:00.568805   22734 main.go:141] libmachine: Making call to close connection to plugin binary
	I0927 00:16:00.568813   22734 addons.go:475] Verifying addon ingress=true in "addons-921129"
	I0927 00:16:00.569177   22734 main.go:141] libmachine: (addons-921129) DBG | Closing plugin on server side
	I0927 00:16:00.569216   22734 main.go:141] libmachine: Successfully made call to close driver server
	I0927 00:16:00.569223   22734 main.go:141] libmachine: Making call to close connection to plugin binary
	I0927 00:16:00.569292   22734 main.go:141] libmachine: (addons-921129) DBG | Closing plugin on server side
	I0927 00:16:00.569319   22734 main.go:141] libmachine: Successfully made call to close driver server
	I0927 00:16:00.569329   22734 main.go:141] libmachine: Making call to close connection to plugin binary
	I0927 00:16:00.569427   22734 main.go:141] libmachine: (addons-921129) DBG | Closing plugin on server side
	I0927 00:16:00.569454   22734 main.go:141] libmachine: (addons-921129) DBG | Closing plugin on server side
	I0927 00:16:00.569481   22734 main.go:141] libmachine: Successfully made call to close driver server
	I0927 00:16:00.569492   22734 main.go:141] libmachine: Making call to close connection to plugin binary
	I0927 00:16:00.569502   22734 addons.go:475] Verifying addon registry=true in "addons-921129"
	I0927 00:16:00.570890   22734 main.go:141] libmachine: Successfully made call to close driver server
	I0927 00:16:00.570921   22734 main.go:141] libmachine: Making call to close connection to plugin binary
	I0927 00:16:00.571139   22734 main.go:141] libmachine: (addons-921129) DBG | Closing plugin on server side
	I0927 00:16:00.571235   22734 main.go:141] libmachine: Successfully made call to close driver server
	I0927 00:16:00.571237   22734 out.go:177] * Verifying ingress addon...
	I0927 00:16:00.571248   22734 main.go:141] libmachine: Making call to close connection to plugin binary
	I0927 00:16:00.571316   22734 out.go:177] * Verifying registry addon...
	I0927 00:16:00.571256   22734 addons.go:475] Verifying addon metrics-server=true in "addons-921129"
	I0927 00:16:00.572810   22734 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-921129 service yakd-dashboard -n yakd-dashboard
	
	I0927 00:16:00.573618   22734 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0927 00:16:00.573688   22734 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0927 00:16:00.613874   22734 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0927 00:16:00.613901   22734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:16:00.614129   22734 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0927 00:16:00.614140   22734 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:16:00.694902   22734 main.go:141] libmachine: Making call to close driver server
	I0927 00:16:00.694930   22734 main.go:141] libmachine: (addons-921129) Calling .Close
	I0927 00:16:00.695251   22734 main.go:141] libmachine: (addons-921129) DBG | Closing plugin on server side
	I0927 00:16:00.695304   22734 main.go:141] libmachine: Successfully made call to close driver server
	I0927 00:16:00.695320   22734 main.go:141] libmachine: Making call to close connection to plugin binary
	I0927 00:16:00.741838   22734 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0927 00:16:01.197384   22734 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:16:01.198390   22734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:16:01.568656   22734 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (10.462946825s)
	I0927 00:16:01.568697   22734 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (6.577914685s)
	I0927 00:16:01.568708   22734 main.go:141] libmachine: Making call to close driver server
	I0927 00:16:01.568722   22734 main.go:141] libmachine: (addons-921129) Calling .Close
	I0927 00:16:01.568831   22734 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (5.907522157s)
	I0927 00:16:01.568884   22734 api_server.go:72] duration metric: took 14.658524139s to wait for apiserver process to appear ...
	I0927 00:16:01.568899   22734 api_server.go:88] waiting for apiserver healthz status ...
	I0927 00:16:01.568990   22734 api_server.go:253] Checking apiserver healthz at https://192.168.39.24:8443/healthz ...
	I0927 00:16:01.569194   22734 main.go:141] libmachine: (addons-921129) DBG | Closing plugin on server side
	I0927 00:16:01.569222   22734 main.go:141] libmachine: Successfully made call to close driver server
	I0927 00:16:01.569232   22734 main.go:141] libmachine: Making call to close connection to plugin binary
	I0927 00:16:01.569246   22734 main.go:141] libmachine: Making call to close driver server
	I0927 00:16:01.569254   22734 main.go:141] libmachine: (addons-921129) Calling .Close
	I0927 00:16:01.569544   22734 main.go:141] libmachine: Successfully made call to close driver server
	I0927 00:16:01.569563   22734 main.go:141] libmachine: Making call to close connection to plugin binary
	I0927 00:16:01.569573   22734 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-921129"
	I0927 00:16:01.570474   22734 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0927 00:16:01.571320   22734 out.go:177] * Verifying csi-hostpath-driver addon...
	I0927 00:16:01.573228   22734 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0927 00:16:01.574217   22734 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0927 00:16:01.574475   22734 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0927 00:16:01.574497   22734 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0927 00:16:01.592144   22734 api_server.go:279] https://192.168.39.24:8443/healthz returned 200:
	ok
	I0927 00:16:01.596061   22734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:16:01.596444   22734 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0927 00:16:01.596468   22734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:01.596471   22734 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:16:01.600323   22734 api_server.go:141] control plane version: v1.31.1
	I0927 00:16:01.600354   22734 api_server.go:131] duration metric: took 31.445585ms to wait for apiserver health ...
	I0927 00:16:01.600364   22734 system_pods.go:43] waiting for kube-system pods to appear ...
	I0927 00:16:01.626070   22734 system_pods.go:59] 18 kube-system pods found
	I0927 00:16:01.626131   22734 system_pods.go:61] "coredns-7c65d6cfc9-g7j8q" [a85dd1ba-5dae-4791-8f7a-248a178b7c80] Running
	I0927 00:16:01.626146   22734 system_pods.go:61] "coredns-7c65d6cfc9-v8bhm" [b1e0598e-0eed-4d5d-9d48-18d09840c7ba] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
	I0927 00:16:01.626157   22734 system_pods.go:61] "csi-hostpath-attacher-0" [84adc844-7fd2-44ae-a188-52bd0bb9d7ea] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0927 00:16:01.626168   22734 system_pods.go:61] "csi-hostpath-resizer-0" [026c5341-8604-4261-bb77-c88889027c24] Pending
	I0927 00:16:01.626213   22734 system_pods.go:61] "csi-hostpathplugin-p2v5r" [478aa559-7c28-4d88-94be-60a75a96d5e3] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0927 00:16:01.626230   22734 system_pods.go:61] "etcd-addons-921129" [aaa7a350-8ced-4838-86a2-c786194fa78c] Running
	I0927 00:16:01.626237   22734 system_pods.go:61] "kube-apiserver-addons-921129" [ee56f2fa-9f72-45ab-85b2-266da3706366] Running
	I0927 00:16:01.626243   22734 system_pods.go:61] "kube-controller-manager-addons-921129" [c266bc22-512b-4942-8f8c-d38e2b8aec53] Running
	I0927 00:16:01.626252   22734 system_pods.go:61] "kube-ingress-dns-minikube" [49301ec5-b012-4ef9-a0ce-2ed5cca249c6] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0927 00:16:01.626293   22734 system_pods.go:61] "kube-proxy-sw68g" [1b74e057-397d-442b-9bbf-5f2ea77c6e55] Running
	I0927 00:16:01.626304   22734 system_pods.go:61] "kube-scheduler-addons-921129" [011579b6-ce28-47bc-bdcd-5a7c67c37c93] Running
	I0927 00:16:01.626316   22734 system_pods.go:61] "metrics-server-84c5f94fbc-t2mww" [ce1f5642-6ded-410b-b64d-44b41bb0286e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0927 00:16:01.626329   22734 system_pods.go:61] "nvidia-device-plugin-daemonset-49wpz" [1061b6c1-f508-4d17-a693-c6231549ad3b] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0927 00:16:01.626341   22734 system_pods.go:61] "registry-66c9cd494c-fwsrk" [6f46ca63-ee6e-40a2-847d-0027eb2fd753] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0927 00:16:01.626353   22734 system_pods.go:61] "registry-proxy-4k4lw" [a0e1613c-f205-4152-b2b1-2310f2f418b0] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0927 00:16:01.626367   22734 system_pods.go:61] "snapshot-controller-56fcc65765-8nn9x" [e7d09d25-92e7-4161-9432-f377b8c2cb8b] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0927 00:16:01.626385   22734 system_pods.go:61] "snapshot-controller-56fcc65765-nvwjp" [9e99a783-dff1-4e06-b1d8-ea1aa110d963] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0927 00:16:01.626411   22734 system_pods.go:61] "storage-provisioner" [0a880540-f16a-4269-85b4-2422c93f1f53] Running
	I0927 00:16:01.626423   22734 system_pods.go:74] duration metric: took 26.051544ms to wait for pod list to return data ...
	I0927 00:16:01.626437   22734 default_sa.go:34] waiting for default service account to be created ...
	I0927 00:16:01.634046   22734 default_sa.go:45] found service account: "default"
	I0927 00:16:01.634079   22734 default_sa.go:55] duration metric: took 7.632153ms for default service account to be created ...
	I0927 00:16:01.634091   22734 system_pods.go:116] waiting for k8s-apps to be running ...
	I0927 00:16:01.643109   22734 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0927 00:16:01.643138   22734 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0927 00:16:01.684995   22734 system_pods.go:86] 18 kube-system pods found
	I0927 00:16:01.685033   22734 system_pods.go:89] "coredns-7c65d6cfc9-g7j8q" [a85dd1ba-5dae-4791-8f7a-248a178b7c80] Running
	I0927 00:16:01.685042   22734 system_pods.go:89] "coredns-7c65d6cfc9-v8bhm" [b1e0598e-0eed-4d5d-9d48-18d09840c7ba] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
	I0927 00:16:01.685051   22734 system_pods.go:89] "csi-hostpath-attacher-0" [84adc844-7fd2-44ae-a188-52bd0bb9d7ea] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0927 00:16:01.685060   22734 system_pods.go:89] "csi-hostpath-resizer-0" [026c5341-8604-4261-bb77-c88889027c24] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0927 00:16:01.685075   22734 system_pods.go:89] "csi-hostpathplugin-p2v5r" [478aa559-7c28-4d88-94be-60a75a96d5e3] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0927 00:16:01.685082   22734 system_pods.go:89] "etcd-addons-921129" [aaa7a350-8ced-4838-86a2-c786194fa78c] Running
	I0927 00:16:01.685089   22734 system_pods.go:89] "kube-apiserver-addons-921129" [ee56f2fa-9f72-45ab-85b2-266da3706366] Running
	I0927 00:16:01.685095   22734 system_pods.go:89] "kube-controller-manager-addons-921129" [c266bc22-512b-4942-8f8c-d38e2b8aec53] Running
	I0927 00:16:01.685105   22734 system_pods.go:89] "kube-ingress-dns-minikube" [49301ec5-b012-4ef9-a0ce-2ed5cca249c6] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0927 00:16:01.685110   22734 system_pods.go:89] "kube-proxy-sw68g" [1b74e057-397d-442b-9bbf-5f2ea77c6e55] Running
	I0927 00:16:01.685118   22734 system_pods.go:89] "kube-scheduler-addons-921129" [011579b6-ce28-47bc-bdcd-5a7c67c37c93] Running
	I0927 00:16:01.685130   22734 system_pods.go:89] "metrics-server-84c5f94fbc-t2mww" [ce1f5642-6ded-410b-b64d-44b41bb0286e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0927 00:16:01.685139   22734 system_pods.go:89] "nvidia-device-plugin-daemonset-49wpz" [1061b6c1-f508-4d17-a693-c6231549ad3b] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0927 00:16:01.685148   22734 system_pods.go:89] "registry-66c9cd494c-fwsrk" [6f46ca63-ee6e-40a2-847d-0027eb2fd753] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0927 00:16:01.685156   22734 system_pods.go:89] "registry-proxy-4k4lw" [a0e1613c-f205-4152-b2b1-2310f2f418b0] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0927 00:16:01.685168   22734 system_pods.go:89] "snapshot-controller-56fcc65765-8nn9x" [e7d09d25-92e7-4161-9432-f377b8c2cb8b] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0927 00:16:01.685176   22734 system_pods.go:89] "snapshot-controller-56fcc65765-nvwjp" [9e99a783-dff1-4e06-b1d8-ea1aa110d963] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0927 00:16:01.685183   22734 system_pods.go:89] "storage-provisioner" [0a880540-f16a-4269-85b4-2422c93f1f53] Running
	I0927 00:16:01.685196   22734 system_pods.go:126] duration metric: took 51.09789ms to wait for k8s-apps to be running ...
	I0927 00:16:01.685208   22734 system_svc.go:44] waiting for kubelet service to be running ....
	I0927 00:16:01.685256   22734 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0927 00:16:01.791475   22734 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0927 00:16:01.791501   22734 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0927 00:16:01.842183   22734 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0927 00:16:02.082563   22734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:16:02.082566   22734 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:16:02.082717   22734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:02.585946   22734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:16:02.586809   22734 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:16:02.587115   22734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:02.887164   22734 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.145278919s)
	I0927 00:16:02.887200   22734 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (1.201919593s)
	I0927 00:16:02.887213   22734 main.go:141] libmachine: Making call to close driver server
	I0927 00:16:02.887226   22734 main.go:141] libmachine: (addons-921129) Calling .Close
	I0927 00:16:02.887227   22734 system_svc.go:56] duration metric: took 1.202018086s WaitForService to wait for kubelet
	I0927 00:16:02.887237   22734 kubeadm.go:582] duration metric: took 15.97687838s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0927 00:16:02.887260   22734 node_conditions.go:102] verifying NodePressure condition ...
	I0927 00:16:02.887515   22734 main.go:141] libmachine: Successfully made call to close driver server
	I0927 00:16:02.887529   22734 main.go:141] libmachine: Making call to close connection to plugin binary
	I0927 00:16:02.887538   22734 main.go:141] libmachine: Making call to close driver server
	I0927 00:16:02.887544   22734 main.go:141] libmachine: (addons-921129) Calling .Close
	I0927 00:16:02.887732   22734 main.go:141] libmachine: Successfully made call to close driver server
	I0927 00:16:02.887746   22734 main.go:141] libmachine: Making call to close connection to plugin binary
	I0927 00:16:02.887749   22734 main.go:141] libmachine: (addons-921129) DBG | Closing plugin on server side
	I0927 00:16:02.891877   22734 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0927 00:16:02.891909   22734 node_conditions.go:123] node cpu capacity is 2
	I0927 00:16:02.891927   22734 node_conditions.go:105] duration metric: took 4.65831ms to run NodePressure ...
	I0927 00:16:02.891941   22734 start.go:241] waiting for startup goroutines ...
	I0927 00:16:03.078741   22734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:16:03.079388   22734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:03.080393   22734 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:16:03.310383   22734 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.468151568s)
	I0927 00:16:03.310441   22734 main.go:141] libmachine: Making call to close driver server
	I0927 00:16:03.310457   22734 main.go:141] libmachine: (addons-921129) Calling .Close
	I0927 00:16:03.310726   22734 main.go:141] libmachine: Successfully made call to close driver server
	I0927 00:16:03.310742   22734 main.go:141] libmachine: Making call to close connection to plugin binary
	I0927 00:16:03.310751   22734 main.go:141] libmachine: Making call to close driver server
	I0927 00:16:03.310771   22734 main.go:141] libmachine: (addons-921129) Calling .Close
	I0927 00:16:03.311111   22734 main.go:141] libmachine: (addons-921129) DBG | Closing plugin on server side
	I0927 00:16:03.311153   22734 main.go:141] libmachine: Successfully made call to close driver server
	I0927 00:16:03.311164   22734 main.go:141] libmachine: Making call to close connection to plugin binary
	I0927 00:16:03.312470   22734 addons.go:475] Verifying addon gcp-auth=true in "addons-921129"
	I0927 00:16:03.314313   22734 out.go:177] * Verifying gcp-auth addon...
	I0927 00:16:03.316721   22734 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0927 00:16:03.344801   22734 kapi.go:86] Found 0 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0927 00:16:03.579884   22734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:16:03.580270   22734 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:16:03.581044   22734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:04.079319   22734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:04.079673   22734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:16:04.079694   22734 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:16:04.579870   22734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:16:04.579973   22734 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:16:04.580372   22734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:05.079448   22734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:16:05.080611   22734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:05.080667   22734 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:16:05.579766   22734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:16:05.580436   22734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:05.580461   22734 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:16:06.078417   22734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:16:06.080295   22734 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:16:06.080827   22734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:06.578374   22734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:16:06.579268   22734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:06.580477   22734 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:16:07.119241   22734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:07.119319   22734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:16:07.119562   22734 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:16:07.582838   22734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:16:07.583072   22734 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:16:07.583167   22734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:08.077489   22734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:16:08.080376   22734 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:16:08.082010   22734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:08.584609   22734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:16:08.585207   22734 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:16:08.585337   22734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:09.080948   22734 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:16:09.082506   22734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:16:09.083595   22734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:09.584343   22734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:16:09.585110   22734 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:16:09.585632   22734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:10.079317   22734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:10.079581   22734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:16:10.079912   22734 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:16:10.579295   22734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:16:10.579540   22734 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:16:10.579895   22734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:11.078662   22734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:16:11.078840   22734 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:16:11.079029   22734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:11.578635   22734 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:16:11.579444   22734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:16:11.579604   22734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:12.077800   22734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:16:12.078210   22734 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:16:12.079067   22734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:12.579231   22734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:16:12.579718   22734 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:16:12.580625   22734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:13.078744   22734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:16:13.079047   22734 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:16:13.079565   22734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:13.580334   22734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:16:13.580454   22734 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:16:13.580712   22734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:14.080134   22734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:16:14.080421   22734 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:16:14.080809   22734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:14.579430   22734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:16:14.579732   22734 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:16:14.580010   22734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:15.212191   22734 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:16:15.213050   22734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:16:15.213668   22734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:15.579333   22734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:16:15.579634   22734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:15.580932   22734 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:16:16.078990   22734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:16:16.079146   22734 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:16:16.079922   22734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:16.630492   22734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:16:16.630914   22734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:16.631351   22734 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:16:17.256102   22734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:16:17.256362   22734 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:16:17.256541   22734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:17.581087   22734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:16:17.581442   22734 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:16:17.581618   22734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:18.079431   22734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:18.079483   22734 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:16:18.080576   22734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:16:18.584733   22734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:16:18.584922   22734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:18.585303   22734 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:16:19.078134   22734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:16:19.078380   22734 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:16:19.079412   22734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:19.579815   22734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:16:19.580444   22734 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:16:19.580594   22734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:20.078363   22734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:16:20.078416   22734 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:16:20.080808   22734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:20.578652   22734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:16:20.578969   22734 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:16:20.579347   22734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:21.079075   22734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:16:21.079227   22734 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:16:21.079881   22734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:21.578701   22734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:16:21.579827   22734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:21.580193   22734 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:16:22.098391   22734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:16:22.098512   22734 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:16:22.099182   22734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:22.581034   22734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:16:22.581105   22734 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:16:22.581406   22734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:23.078610   22734 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:16:23.078608   22734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:16:23.079316   22734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:23.579313   22734 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:16:23.579405   22734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:23.579418   22734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:16:24.079348   22734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:16:24.079697   22734 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:16:24.081586   22734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:24.578672   22734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:16:24.579411   22734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:24.579646   22734 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:16:25.080401   22734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:25.080716   22734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:16:25.080864   22734 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:16:25.580732   22734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:16:25.580802   22734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:25.581130   22734 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:16:26.079538   22734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:16:26.079750   22734 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:16:26.080467   22734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:26.579750   22734 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:16:26.579820   22734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:16:26.582294   22734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:27.079714   22734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:16:27.080369   22734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:27.081065   22734 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:16:27.579924   22734 kapi.go:107] duration metric: took 27.006230821s to wait for kubernetes.io/minikube-addons=registry ...
	I0927 00:16:27.580079   22734 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:16:27.580550   22734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:28.079196   22734 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:16:28.079255   22734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:28.578864   22734 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:16:28.579504   22734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:29.077862   22734 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:16:29.079555   22734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:29.578893   22734 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:16:29.579152   22734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:30.079213   22734 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:16:30.079389   22734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:30.580159   22734 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:16:30.580255   22734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:31.079785   22734 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:16:31.079800   22734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:31.579132   22734 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:16:31.579273   22734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:32.131712   22734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:32.133763   22734 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:16:32.580513   22734 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:16:32.580835   22734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:33.080013   22734 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:16:33.080106   22734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:33.579692   22734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:33.579859   22734 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:16:34.079706   22734 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:16:34.079860   22734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:34.579325   22734 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:16:34.579648   22734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:35.080057   22734 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:16:35.080253   22734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:35.579064   22734 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:16:35.579284   22734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:36.077754   22734 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:16:36.078578   22734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:36.583815   22734 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:16:36.583830   22734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:37.080410   22734 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:16:37.080877   22734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:37.587110   22734 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:16:37.587192   22734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:38.078103   22734 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:16:38.078540   22734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:38.579454   22734 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:16:38.579592   22734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:39.079913   22734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:39.082124   22734 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:16:39.578943   22734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:39.579652   22734 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:16:40.078649   22734 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:16:40.079857   22734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:40.579987   22734 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:16:40.580117   22734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:41.079537   22734 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:16:41.081199   22734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:41.577394   22734 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:16:41.579805   22734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:42.078529   22734 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:16:42.078764   22734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:42.579176   22734 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:16:42.580280   22734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:43.079108   22734 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:16:43.079540   22734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:43.589960   22734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:43.590959   22734 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:16:44.079739   22734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:44.083535   22734 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:16:44.579270   22734 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:16:44.582372   22734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:45.095560   22734 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:16:45.095634   22734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:45.579485   22734 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:16:45.579564   22734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:46.079473   22734 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:16:46.079573   22734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:46.579676   22734 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:16:46.580276   22734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:47.079686   22734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:47.080674   22734 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:16:47.578697   22734 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:16:47.579124   22734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:48.080630   22734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:48.080850   22734 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:16:48.579608   22734 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:16:48.579756   22734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:49.080208   22734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:49.080354   22734 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:16:49.579633   22734 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:16:49.579841   22734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:50.078125   22734 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:16:50.079780   22734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:50.578434   22734 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:16:50.579128   22734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:51.080477   22734 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:16:51.080911   22734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:51.579529   22734 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:16:51.579845   22734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:52.080002   22734 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:16:52.081411   22734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:52.582004   22734 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:16:52.583602   22734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:53.130209   22734 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:16:53.130265   22734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:53.578756   22734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:53.580366   22734 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:16:54.077805   22734 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:16:54.080498   22734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:54.598777   22734 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:16:54.599003   22734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:55.080860   22734 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:16:55.081354   22734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:55.578904   22734 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:16:55.579245   22734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:56.079165   22734 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:16:56.079766   22734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:56.579247   22734 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:16:56.579601   22734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:57.079577   22734 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:16:57.079806   22734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:57.580287   22734 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:16:57.580410   22734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:58.178628   22734 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:16:58.178735   22734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:58.581051   22734 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:16:58.581534   22734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:59.088488   22734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:59.089871   22734 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:16:59.600262   22734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:59.600327   22734 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:17:00.080360   22734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:17:00.081558   22734 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:17:00.580013   22734 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:17:00.582032   22734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:17:01.078341   22734 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:17:01.079421   22734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:17:01.580370   22734 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:17:01.580672   22734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:17:02.080250   22734 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:17:02.080702   22734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:17:02.582007   22734 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:17:02.582528   22734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:17:03.080352   22734 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:17:03.080536   22734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:17:03.579780   22734 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:17:03.580124   22734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:17:04.078658   22734 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:17:04.081035   22734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:17:04.580060   22734 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:17:04.580153   22734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:17:05.079441   22734 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:17:05.079644   22734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:17:05.577682   22734 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:17:05.578904   22734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:17:06.290751   22734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:17:06.290800   22734 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:17:06.581414   22734 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:17:06.581700   22734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:17:07.079218   22734 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:17:07.079773   22734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:17:07.579921   22734 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:17:07.580525   22734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:17:08.078215   22734 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:17:08.078409   22734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:17:08.579948   22734 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:17:08.580009   22734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:17:09.078646   22734 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:17:09.078806   22734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:17:09.579060   22734 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:17:09.580933   22734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:17:10.079858   22734 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:17:10.080301   22734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:17:10.580835   22734 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:17:10.581271   22734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:17:11.078498   22734 kapi.go:107] duration metric: took 1m10.504878129s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0927 00:17:11.079867   22734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:17:11.579069   22734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:17:12.079192   22734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:17:12.579160   22734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:17:13.079642   22734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:17:13.580952   22734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:17:14.079793   22734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:17:14.588719   22734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:17:15.078804   22734 kapi.go:107] duration metric: took 1m13.504587059s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0927 00:17:26.821575   22734 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0927 00:17:26.821597   22734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:27.320318   22734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:27.820836   22734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:28.320945   22734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:28.820637   22734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:29.319959   22734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:29.820166   22734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:30.321175   22734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:30.820890   22734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:31.320746   22734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:31.820730   22734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:32.321114   22734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:32.820389   22734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:33.321063   22734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:33.820240   22734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:34.321032   22734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:34.822027   22734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:35.320150   22734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:35.820522   22734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:36.320944   22734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:36.820585   22734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:37.320106   22734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:37.820930   22734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:38.321209   22734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:38.821879   22734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:39.320421   22734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:39.821205   22734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:40.320906   22734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:40.822579   22734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:41.325937   22734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:41.821489   22734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:42.332558   22734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:42.821326   22734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:43.321444   22734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:43.820834   22734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:44.320673   22734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:44.820734   22734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:45.320241   22734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:45.820957   22734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:46.320696   22734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:46.820970   22734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:47.320087   22734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:47.821083   22734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:48.321480   22734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:48.821378   22734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:49.320805   22734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:49.820292   22734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:50.320607   22734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:50.821594   22734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:51.321066   22734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:51.819767   22734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:52.320413   22734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:52.822473   22734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:53.325877   22734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:53.822564   22734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:54.321322   22734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:54.820927   22734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:55.320680   22734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:55.819392   22734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:56.320932   22734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:56.820614   22734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:57.321065   22734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:57.820836   22734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:58.320484   22734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:58.821317   22734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:59.320558   22734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:59.820925   22734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:18:00.320150   22734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:18:00.820769   22734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:18:01.320382   22734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:18:01.820731   22734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:18:02.320839   22734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:18:02.819969   22734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:18:03.323347   22734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:18:03.820745   22734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:18:04.320420   22734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:18:04.827019   22734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:18:05.320348   22734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:18:05.821000   22734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:18:06.320199   22734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:18:06.820150   22734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:18:07.328112   22734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:18:07.821129   22734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:18:08.320760   22734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:18:08.820078   22734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:18:09.320848   22734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:18:09.820260   22734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:18:10.320460   22734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:18:10.821368   22734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:18:11.320932   22734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:18:11.820662   22734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:18:12.320707   22734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:18:12.820359   22734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:18:13.323374   22734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:18:13.820848   22734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:18:14.320351   22734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:18:14.820876   22734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:18:15.320682   22734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:18:15.820130   22734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:18:16.320391   22734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:18:16.821217   22734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:18:17.325484   22734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:18:17.821035   22734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:18:18.320471   22734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:18:18.820685   22734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:18:19.321689   22734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:18:19.819907   22734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:18:20.320482   22734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:18:20.820882   22734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:18:21.320477   22734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:18:21.820284   22734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:18:22.320151   22734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:18:22.821265   22734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:18:23.323739   22734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:18:23.820501   22734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:18:24.321523   22734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:18:24.820770   22734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:18:25.320333   22734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:18:25.820750   22734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:18:26.320283   22734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:18:26.821048   22734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:18:27.321864   22734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:18:27.821570   22734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:18:28.321867   22734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:18:28.820480   22734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:18:29.321508   22734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:18:29.821000   22734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:18:30.320553   22734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:18:30.821705   22734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:18:31.321428   22734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:18:31.821017   22734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:18:32.320665   22734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:18:32.820207   22734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:18:33.320704   22734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:18:33.897167   22734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:18:34.321107   22734 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:18:34.820952   22734 kapi.go:107] duration metric: took 2m31.504228741s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0927 00:18:34.822886   22734 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-921129 cluster.
	I0927 00:18:34.824372   22734 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0927 00:18:34.825663   22734 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0927 00:18:34.827152   22734 out.go:177] * Enabled addons: storage-provisioner, ingress-dns, cloud-spanner, default-storageclass, inspektor-gadget, volcano, nvidia-device-plugin, metrics-server, yakd, storage-provisioner-rancher, volumesnapshots, registry, ingress, csi-hostpath-driver, gcp-auth
	I0927 00:18:34.828346   22734 addons.go:510] duration metric: took 2m47.917980493s for enable addons: enabled=[storage-provisioner ingress-dns cloud-spanner default-storageclass inspektor-gadget volcano nvidia-device-plugin metrics-server yakd storage-provisioner-rancher volumesnapshots registry ingress csi-hostpath-driver gcp-auth]
	I0927 00:18:34.828389   22734 start.go:246] waiting for cluster config update ...
	I0927 00:18:34.828406   22734 start.go:255] writing updated cluster config ...
	I0927 00:18:34.828719   22734 ssh_runner.go:195] Run: rm -f paused
	I0927 00:18:34.881969   22734 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I0927 00:18:34.883532   22734 out.go:177] * Done! kubectl is now configured to use "addons-921129" cluster and "default" namespace by default
	
	
	==> Docker <==
	Sep 27 00:28:11 addons-921129 dockerd[1202]: time="2024-09-27T00:28:11.180529638Z" level=warning msg="cleaning up after shim disconnected" id=d02954a0a5443fab630086258ff6978ac64804be236bc38d90fb0b83d7759ced namespace=moby
	Sep 27 00:28:11 addons-921129 dockerd[1202]: time="2024-09-27T00:28:11.180542335Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 27 00:28:13 addons-921129 dockerd[1195]: time="2024-09-27T00:28:13.096330916Z" level=info msg="Attempting next endpoint for pull after error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/latest\": unauthorized: authentication failed" spanID=71db3576224e83a2 traceID=7a888d0b9a0976c2ae8fd0c5790de3f1
	Sep 27 00:28:13 addons-921129 dockerd[1195]: time="2024-09-27T00:28:13.100036055Z" level=error msg="Handler for POST /v1.43/images/create returned error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/latest\": unauthorized: authentication failed" spanID=71db3576224e83a2 traceID=7a888d0b9a0976c2ae8fd0c5790de3f1
	Sep 27 00:28:31 addons-921129 dockerd[1202]: time="2024-09-27T00:28:31.605365035Z" level=info msg="shim disconnected" id=5eb96b6fd22193e7d4cf084fa4913a150ae3d77fbd42a1d0c34fbc4e617b5f94 namespace=moby
	Sep 27 00:28:31 addons-921129 dockerd[1202]: time="2024-09-27T00:28:31.605437774Z" level=warning msg="cleaning up after shim disconnected" id=5eb96b6fd22193e7d4cf084fa4913a150ae3d77fbd42a1d0c34fbc4e617b5f94 namespace=moby
	Sep 27 00:28:31 addons-921129 dockerd[1202]: time="2024-09-27T00:28:31.605454366Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 27 00:28:31 addons-921129 dockerd[1195]: time="2024-09-27T00:28:31.605707886Z" level=info msg="ignoring event" container=5eb96b6fd22193e7d4cf084fa4913a150ae3d77fbd42a1d0c34fbc4e617b5f94 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 27 00:28:32 addons-921129 dockerd[1202]: time="2024-09-27T00:28:32.101426884Z" level=info msg="shim disconnected" id=ecdb15fbd0a186b19142ce9d27845be5f463b3a7f30398b119596ae6823dfbc5 namespace=moby
	Sep 27 00:28:32 addons-921129 dockerd[1202]: time="2024-09-27T00:28:32.101500356Z" level=warning msg="cleaning up after shim disconnected" id=ecdb15fbd0a186b19142ce9d27845be5f463b3a7f30398b119596ae6823dfbc5 namespace=moby
	Sep 27 00:28:32 addons-921129 dockerd[1202]: time="2024-09-27T00:28:32.101510644Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 27 00:28:32 addons-921129 dockerd[1195]: time="2024-09-27T00:28:32.102220742Z" level=info msg="ignoring event" container=ecdb15fbd0a186b19142ce9d27845be5f463b3a7f30398b119596ae6823dfbc5 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 27 00:28:32 addons-921129 dockerd[1195]: time="2024-09-27T00:28:32.197755316Z" level=info msg="ignoring event" container=ca26ee74221fdf061c6be71e29e14d4b7ecaa0f2c0f4ba12cd1189116bbf136f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 27 00:28:32 addons-921129 dockerd[1202]: time="2024-09-27T00:28:32.198649440Z" level=info msg="shim disconnected" id=ca26ee74221fdf061c6be71e29e14d4b7ecaa0f2c0f4ba12cd1189116bbf136f namespace=moby
	Sep 27 00:28:32 addons-921129 dockerd[1202]: time="2024-09-27T00:28:32.198698804Z" level=warning msg="cleaning up after shim disconnected" id=ca26ee74221fdf061c6be71e29e14d4b7ecaa0f2c0f4ba12cd1189116bbf136f namespace=moby
	Sep 27 00:28:32 addons-921129 dockerd[1202]: time="2024-09-27T00:28:32.198706263Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 27 00:28:32 addons-921129 dockerd[1202]: time="2024-09-27T00:28:32.283603953Z" level=info msg="shim disconnected" id=e9259af860792accd9ce40238206edbb17cedbfc72e6eaab2e7eed10f5456b5a namespace=moby
	Sep 27 00:28:32 addons-921129 dockerd[1202]: time="2024-09-27T00:28:32.283810906Z" level=warning msg="cleaning up after shim disconnected" id=e9259af860792accd9ce40238206edbb17cedbfc72e6eaab2e7eed10f5456b5a namespace=moby
	Sep 27 00:28:32 addons-921129 dockerd[1195]: time="2024-09-27T00:28:32.284142498Z" level=info msg="ignoring event" container=e9259af860792accd9ce40238206edbb17cedbfc72e6eaab2e7eed10f5456b5a module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 27 00:28:32 addons-921129 dockerd[1202]: time="2024-09-27T00:28:32.284513358Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 27 00:28:32 addons-921129 dockerd[1202]: time="2024-09-27T00:28:32.429215259Z" level=info msg="shim disconnected" id=31875309945aa8136194c7b05cafa2ad7d3284a85daf6d35c260f56bc1604fe8 namespace=moby
	Sep 27 00:28:32 addons-921129 dockerd[1202]: time="2024-09-27T00:28:32.429699093Z" level=warning msg="cleaning up after shim disconnected" id=31875309945aa8136194c7b05cafa2ad7d3284a85daf6d35c260f56bc1604fe8 namespace=moby
	Sep 27 00:28:32 addons-921129 dockerd[1202]: time="2024-09-27T00:28:32.429901178Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 27 00:28:32 addons-921129 dockerd[1195]: time="2024-09-27T00:28:32.429891193Z" level=info msg="ignoring event" container=31875309945aa8136194c7b05cafa2ad7d3284a85daf6d35c260f56bc1604fe8 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 27 00:28:32 addons-921129 dockerd[1202]: time="2024-09-27T00:28:32.453519292Z" level=warning msg="cleanup warnings time=\"2024-09-27T00:28:32Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=moby
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	5e939c52d905d       a416a98b71e22                                                                                                                25 seconds ago      Exited              helper-pod                0                   011e48a17526c       helper-pod-delete-pvc-8efc34d2-173d-43c8-a797-ed9149a8a1e5
	25d236009c3a4       busybox@sha256:c230832bd3b0be59a6c47ed64294f9ce71e91b327957920b6929a0caa8353140                                              30 seconds ago      Exited              busybox                   0                   0cabe2dfb92bc       test-local-path
	fc4cd2c89899c       busybox@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79                                              35 seconds ago      Exited              helper-pod                0                   85237805dc1ea       helper-pod-create-pvc-8efc34d2-173d-43c8-a797-ed9149a8a1e5
	273710df64365       kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6                                  44 seconds ago      Running             hello-world-app           0                   8bf912dd117ab       hello-world-app-55bf9c44b4-5d744
	133b54ceb8be6       nginx@sha256:a5127daff3d6f4606be3100a252419bfa84fd6ee5cd74d0feaca1a5068f97dcf                                                53 seconds ago      Running             nginx                     0                   9a5b3db47a1f4       nginx
	8c3fe8b57d995       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:e6c5b3bc32072ea370d34c27836efd11b3519d25bd444c2a8efc339cff0e20fb                 10 minutes ago      Running             gcp-auth                  0                   60da43a1f4b64       gcp-auth-89d5ffd79-dz8bp
	488f3a94bd24e       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a320a50cc91bd15fd2d6fa6de58bd98c1bd64b9a6f926ce23a600d87043455a3   11 minutes ago      Exited              patch                     0                   083a2404e5e20       ingress-nginx-admission-patch-7vckb
	c1ab42e4b3228       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a320a50cc91bd15fd2d6fa6de58bd98c1bd64b9a6f926ce23a600d87043455a3   11 minutes ago      Exited              create                    0                   342b1e4333354       ingress-nginx-admission-create-dpz7t
	a6372a28735a9       rancher/local-path-provisioner@sha256:e34c88ae0affb1cdefbb874140d6339d4a27ec4ee420ae8199cd839997b05246                       11 minutes ago      Running             local-path-provisioner    0                   e8c39081da4c5       local-path-provisioner-86d989889c-xjw9q
	c1d3752fb1788       6e38f40d628db                                                                                                                12 minutes ago      Running             storage-provisioner       0                   26a1ab7a9fd3b       storage-provisioner
	7c769dbecf9d3       c69fa2e9cbf5f                                                                                                                12 minutes ago      Running             coredns                   0                   1c258d549a2c4       coredns-7c65d6cfc9-g7j8q
	d7869061ef98a       60c005f310ff3                                                                                                                12 minutes ago      Running             kube-proxy                0                   41b7566f5cd4e       kube-proxy-sw68g
	83c44c5bbf5e5       2e96e5913fc06                                                                                                                12 minutes ago      Running             etcd                      0                   cf7bc98d9c8ce       etcd-addons-921129
	32045d5d3bf67       175ffd71cce3d                                                                                                                12 minutes ago      Running             kube-controller-manager   0                   b7b955c623a65       kube-controller-manager-addons-921129
	17749e2a8e184       6bab7719df100                                                                                                                12 minutes ago      Running             kube-apiserver            0                   8b7ebec7f0280       kube-apiserver-addons-921129
	a272eb306b03d       9aa1fad941575                                                                                                                12 minutes ago      Running             kube-scheduler            0                   78c7c0bed3482       kube-scheduler-addons-921129
	
	
	==> coredns [7c769dbecf9d] <==
	[INFO] 10.244.0.21:42711 - 6145 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000095222s
	[INFO] 10.244.0.21:54673 - 54762 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.001409567s
	[INFO] 10.244.0.21:42711 - 33053 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000119317s
	[INFO] 10.244.0.21:54673 - 52066 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000152933s
	[INFO] 10.244.0.21:42711 - 19880 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000106952s
	[INFO] 10.244.0.21:54673 - 41636 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.00015552s
	[INFO] 10.244.0.21:42711 - 27836 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.0009128s
	[INFO] 10.244.0.21:42711 - 51293 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.003457494s
	[INFO] 10.244.0.21:42711 - 64226 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.001996055s
	[INFO] 10.244.0.21:54673 - 57437 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000960754s
	[INFO] 10.244.0.21:54673 - 33937 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.00075879s
	[INFO] 10.244.0.21:56063 - 11855 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000134002s
	[INFO] 10.244.0.21:36117 - 868 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000075338s
	[INFO] 10.244.0.21:36117 - 41229 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000092455s
	[INFO] 10.244.0.21:36117 - 33062 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.00005663s
	[INFO] 10.244.0.21:36117 - 8722 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000110284s
	[INFO] 10.244.0.21:56063 - 18295 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.0000639s
	[INFO] 10.244.0.21:36117 - 30837 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000098474s
	[INFO] 10.244.0.21:56063 - 18255 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000072963s
	[INFO] 10.244.0.21:36117 - 34162 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000074618s
	[INFO] 10.244.0.21:56063 - 59193 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000202142s
	[INFO] 10.244.0.21:36117 - 13903 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000041379s
	[INFO] 10.244.0.21:56063 - 53787 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000118982s
	[INFO] 10.244.0.21:56063 - 58790 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.00013809s
	[INFO] 10.244.0.21:56063 - 49271 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000081515s
	
	
	==> describe nodes <==
	Name:               addons-921129
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-921129
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=eee16a295c071ed5a0e96cbbc00bcd13b2654625
	                    minikube.k8s.io/name=addons-921129
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_27T00_15_42_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-921129
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 27 Sep 2024 00:15:39 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-921129
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 27 Sep 2024 00:28:29 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 27 Sep 2024 00:28:17 +0000   Fri, 27 Sep 2024 00:15:37 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 27 Sep 2024 00:28:17 +0000   Fri, 27 Sep 2024 00:15:37 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 27 Sep 2024 00:28:17 +0000   Fri, 27 Sep 2024 00:15:37 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 27 Sep 2024 00:28:17 +0000   Fri, 27 Sep 2024 00:15:44 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.24
	  Hostname:    addons-921129
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	System Info:
	  Machine ID:                 1a49e861f9754553aab993c87d6b320b
	  System UUID:                1a49e861-f975-4553-aab9-93c87d6b320b
	  Boot ID:                    bd315a6c-7149-415c-b873-e2af185f2f21
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.3.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m16s
	  default                     hello-world-app-55bf9c44b4-5d744           0 (0%)        0 (0%)      0 (0%)           0 (0%)         47s
	  default                     nginx                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         55s
	  gcp-auth                    gcp-auth-89d5ffd79-dz8bp                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 coredns-7c65d6cfc9-g7j8q                   100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     12m
	  kube-system                 etcd-addons-921129                         100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         12m
	  kube-system                 kube-apiserver-addons-921129               250m (12%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-controller-manager-addons-921129      200m (10%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-sw68g                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-addons-921129               100m (5%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 storage-provisioner                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  local-path-storage          local-path-provisioner-86d989889c-xjw9q    0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (4%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 12m   kube-proxy       
	  Normal  Starting                 12m   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  12m   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  12m   kubelet          Node addons-921129 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    12m   kubelet          Node addons-921129 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     12m   kubelet          Node addons-921129 status is now: NodeHasSufficientPID
	  Normal  NodeReady                12m   kubelet          Node addons-921129 status is now: NodeReady
	  Normal  RegisteredNode           12m   node-controller  Node addons-921129 event: Registered Node addons-921129 in Controller
	
	
	==> dmesg <==
	[  +5.507166] kauditd_printk_skb: 30 callbacks suppressed
	[  +5.483911] kauditd_printk_skb: 8 callbacks suppressed
	[  +5.114746] kauditd_printk_skb: 7 callbacks suppressed
	[Sep27 00:17] kauditd_printk_skb: 70 callbacks suppressed
	[  +6.734110] kauditd_printk_skb: 2 callbacks suppressed
	[  +7.982910] kauditd_printk_skb: 16 callbacks suppressed
	[ +34.104375] kauditd_printk_skb: 32 callbacks suppressed
	[Sep27 00:18] kauditd_printk_skb: 28 callbacks suppressed
	[ +23.921036] kauditd_printk_skb: 40 callbacks suppressed
	[  +5.537368] kauditd_printk_skb: 9 callbacks suppressed
	[ +17.190160] kauditd_printk_skb: 28 callbacks suppressed
	[  +6.831407] kauditd_printk_skb: 2 callbacks suppressed
	[Sep27 00:19] kauditd_printk_skb: 20 callbacks suppressed
	[ +19.390956] kauditd_printk_skb: 2 callbacks suppressed
	[Sep27 00:22] kauditd_printk_skb: 28 callbacks suppressed
	[Sep27 00:27] kauditd_printk_skb: 28 callbacks suppressed
	[  +5.443435] kauditd_printk_skb: 6 callbacks suppressed
	[  +5.397020] kauditd_printk_skb: 26 callbacks suppressed
	[  +5.962758] kauditd_printk_skb: 15 callbacks suppressed
	[  +8.423899] kauditd_printk_skb: 21 callbacks suppressed
	[  +5.532668] kauditd_printk_skb: 27 callbacks suppressed
	[  +6.499852] kauditd_printk_skb: 43 callbacks suppressed
	[Sep27 00:28] kauditd_printk_skb: 23 callbacks suppressed
	[  +6.723215] kauditd_printk_skb: 64 callbacks suppressed
	[ +21.109275] kauditd_printk_skb: 13 callbacks suppressed
	
	
	==> etcd [83c44c5bbf5e] <==
	{"level":"info","ts":"2024-09-27T00:16:17.234673Z","caller":"traceutil/trace.go:171","msg":"trace[1282606132] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:972; }","duration":"175.257091ms","start":"2024-09-27T00:16:17.059412Z","end":"2024-09-27T00:16:17.234669Z","steps":["trace[1282606132] 'range keys from in-memory index tree'  (duration: 175.041637ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-27T00:16:21.974178Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"182.997836ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 serializable:true keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-27T00:16:21.974252Z","caller":"traceutil/trace.go:171","msg":"trace[1841415705] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:982; }","duration":"183.068731ms","start":"2024-09-27T00:16:21.791161Z","end":"2024-09-27T00:16:21.974230Z","steps":["trace[1841415705] 'range keys from in-memory index tree'  (duration: 182.989411ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-27T00:16:21.974359Z","caller":"traceutil/trace.go:171","msg":"trace[1289327170] linearizableReadLoop","detail":"{readStateIndex:1005; appliedIndex:1004; }","duration":"173.040012ms","start":"2024-09-27T00:16:21.801313Z","end":"2024-09-27T00:16:21.974353Z","steps":["trace[1289327170] 'read index received'  (duration: 172.455864ms)","trace[1289327170] 'applied index is now lower than readState.Index'  (duration: 583.776µs)"],"step_count":2}
	{"level":"info","ts":"2024-09-27T00:16:21.974501Z","caller":"traceutil/trace.go:171","msg":"trace[190906838] transaction","detail":"{read_only:false; response_revision:983; number_of_response:1; }","duration":"307.72441ms","start":"2024-09-27T00:16:21.666767Z","end":"2024-09-27T00:16:21.974491Z","steps":["trace[190906838] 'process raft request'  (duration: 307.047364ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-27T00:16:21.974953Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-27T00:16:21.666750Z","time spent":"307.766833ms","remote":"127.0.0.1:44056","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1098,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:979 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1025 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"warn","ts":"2024-09-27T00:16:21.975118Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"173.81682ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-27T00:16:21.975137Z","caller":"traceutil/trace.go:171","msg":"trace[418857881] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:983; }","duration":"173.837125ms","start":"2024-09-27T00:16:21.801293Z","end":"2024-09-27T00:16:21.975130Z","steps":["trace[418857881] 'agreement among raft nodes before linearized reading'  (duration: 173.80057ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-27T00:16:30.845485Z","caller":"traceutil/trace.go:171","msg":"trace[1165095192] transaction","detail":"{read_only:false; response_revision:1017; number_of_response:1; }","duration":"181.884324ms","start":"2024-09-27T00:16:30.663586Z","end":"2024-09-27T00:16:30.845470Z","steps":["trace[1165095192] 'process raft request'  (duration: 181.456279ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-27T00:16:32.107165Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"229.839757ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-27T00:16:32.107224Z","caller":"traceutil/trace.go:171","msg":"trace[48873040] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1020; }","duration":"229.902046ms","start":"2024-09-27T00:16:31.877312Z","end":"2024-09-27T00:16:32.107214Z","steps":["trace[48873040] 'range keys from in-memory index tree'  (duration: 229.796676ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-27T00:16:32.107353Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"306.551042ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-27T00:16:32.107370Z","caller":"traceutil/trace.go:171","msg":"trace[1575285595] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1020; }","duration":"306.572049ms","start":"2024-09-27T00:16:31.800792Z","end":"2024-09-27T00:16:32.107364Z","steps":["trace[1575285595] 'range keys from in-memory index tree'  (duration: 306.502119ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-27T00:16:32.107502Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"315.512055ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 serializable:true keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-27T00:16:32.107545Z","caller":"traceutil/trace.go:171","msg":"trace[147186617] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1020; }","duration":"316.450269ms","start":"2024-09-27T00:16:31.791083Z","end":"2024-09-27T00:16:32.107533Z","steps":["trace[147186617] 'range keys from in-memory index tree'  (duration: 315.499541ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-27T00:16:32.109635Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-27T00:16:31.800758Z","time spent":"308.798624ms","remote":"127.0.0.1:44072","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":27,"request content":"key:\"/registry/pods\" limit:1 "}
	{"level":"warn","ts":"2024-09-27T00:17:06.266950Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"210.744245ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-27T00:17:06.267144Z","caller":"traceutil/trace.go:171","msg":"trace[697894474] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1200; }","duration":"210.988397ms","start":"2024-09-27T00:17:06.056138Z","end":"2024-09-27T00:17:06.267126Z","steps":["trace[697894474] 'range keys from in-memory index tree'  (duration: 210.647531ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-27T00:17:06.266987Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"209.997905ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-27T00:17:06.267657Z","caller":"traceutil/trace.go:171","msg":"trace[157572687] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1200; }","duration":"210.752591ms","start":"2024-09-27T00:17:06.056893Z","end":"2024-09-27T00:17:06.267646Z","steps":["trace[157572687] 'range keys from in-memory index tree'  (duration: 209.942458ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-27T00:17:14.516490Z","caller":"traceutil/trace.go:171","msg":"trace[34040564] transaction","detail":"{read_only:false; response_revision:1244; number_of_response:1; }","duration":"108.181815ms","start":"2024-09-27T00:17:14.408290Z","end":"2024-09-27T00:17:14.516472Z","steps":["trace[34040564] 'process raft request'  (duration: 107.867924ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-27T00:18:59.331071Z","caller":"traceutil/trace.go:171","msg":"trace[1919319571] transaction","detail":"{read_only:false; response_revision:1531; number_of_response:1; }","duration":"222.547197ms","start":"2024-09-27T00:18:59.108274Z","end":"2024-09-27T00:18:59.330821Z","steps":["trace[1919319571] 'process raft request'  (duration: 222.433625ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-27T00:25:37.834199Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1834}
	{"level":"info","ts":"2024-09-27T00:25:37.934261Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1834,"took":"99.198854ms","hash":1639412972,"current-db-size-bytes":9195520,"current-db-size":"9.2 MB","current-db-size-in-use-bytes":4882432,"current-db-size-in-use":"4.9 MB"}
	{"level":"info","ts":"2024-09-27T00:25:37.934384Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":1639412972,"revision":1834,"compact-revision":-1}
	
	
	==> gcp-auth [8c3fe8b57d99] <==
	2024/09/27 00:19:17 Ready to write response ...
	2024/09/27 00:19:18 Ready to marshal response ...
	2024/09/27 00:19:18 Ready to write response ...
	2024/09/27 00:27:21 Ready to marshal response ...
	2024/09/27 00:27:21 Ready to write response ...
	2024/09/27 00:27:21 Ready to marshal response ...
	2024/09/27 00:27:21 Ready to write response ...
	2024/09/27 00:27:21 Ready to marshal response ...
	2024/09/27 00:27:21 Ready to write response ...
	2024/09/27 00:27:28 Ready to marshal response ...
	2024/09/27 00:27:28 Ready to write response ...
	2024/09/27 00:27:31 Ready to marshal response ...
	2024/09/27 00:27:31 Ready to write response ...
	2024/09/27 00:27:38 Ready to marshal response ...
	2024/09/27 00:27:38 Ready to write response ...
	2024/09/27 00:27:46 Ready to marshal response ...
	2024/09/27 00:27:46 Ready to write response ...
	2024/09/27 00:27:53 Ready to marshal response ...
	2024/09/27 00:27:53 Ready to write response ...
	2024/09/27 00:27:56 Ready to marshal response ...
	2024/09/27 00:27:56 Ready to write response ...
	2024/09/27 00:27:56 Ready to marshal response ...
	2024/09/27 00:27:56 Ready to write response ...
	2024/09/27 00:28:07 Ready to marshal response ...
	2024/09/27 00:28:07 Ready to write response ...
	
	
	==> kernel <==
	 00:28:33 up 13 min,  0 users,  load average: 1.16, 0.93, 0.67
	Linux addons-921129 5.10.207 #1 SMP Mon Sep 23 21:01:39 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [17749e2a8e18] <==
	I0927 00:27:32.505575       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0927 00:27:33.662046       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I0927 00:27:36.718682       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I0927 00:27:38.163790       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I0927 00:27:38.352803       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.105.255.229"}
	I0927 00:27:45.584691       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	I0927 00:27:46.893973       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.99.95.156"}
	E0927 00:27:48.887427       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"ingress-nginx\" not found]"
	E0927 00:27:50.100940       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"ingress-nginx\" not found]"
	E0927 00:27:50.107775       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"ingress-nginx\" not found]"
	E0927 00:28:04.502518       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"csi-hostpathplugin-sa\" not found]"
	I0927 00:28:10.708330       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0927 00:28:10.708382       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0927 00:28:10.731537       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0927 00:28:10.731685       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0927 00:28:10.770421       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0927 00:28:10.770818       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0927 00:28:10.781995       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0927 00:28:10.782262       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0927 00:28:10.813765       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0927 00:28:10.813830       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0927 00:28:11.771668       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0927 00:28:11.815823       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0927 00:28:11.847447       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	E0927 00:28:23.720367       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	
	
	==> kube-controller-manager [32045d5d3bf6] <==
	E0927 00:28:15.782670       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0927 00:28:16.214986       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0927 00:28:16.215340       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0927 00:28:16.639071       1 shared_informer.go:313] Waiting for caches to sync for resource quota
	I0927 00:28:16.639348       1 shared_informer.go:320] Caches are synced for resource quota
	I0927 00:28:16.844789       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I0927 00:28:16.845156       1 shared_informer.go:320] Caches are synced for garbage collector
	I0927 00:28:17.505896       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="addons-921129"
	W0927 00:28:19.148421       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0927 00:28:19.148464       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0927 00:28:21.810011       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0927 00:28:21.810134       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0927 00:28:21.959380       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0927 00:28:21.959418       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0927 00:28:24.762777       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0927 00:28:24.762837       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0927 00:28:26.494541       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0927 00:28:26.494591       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0927 00:28:29.842392       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0927 00:28:29.842447       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0927 00:28:30.236056       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0927 00:28:30.236112       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0927 00:28:32.018795       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/registry-66c9cd494c" duration="5.427µs"
	W0927 00:28:33.118068       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0927 00:28:33.118160       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	
	
	==> kube-proxy [d7869061ef98] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0927 00:15:48.390893       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0927 00:15:48.414967       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.24"]
	E0927 00:15:48.415048       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0927 00:15:48.523877       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0927 00:15:48.523916       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0927 00:15:48.523938       1 server_linux.go:169] "Using iptables Proxier"
	I0927 00:15:48.527998       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0927 00:15:48.528243       1 server.go:483] "Version info" version="v1.31.1"
	I0927 00:15:48.528255       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0927 00:15:48.540129       1 config.go:328] "Starting node config controller"
	I0927 00:15:48.540145       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0927 00:15:48.553338       1 config.go:199] "Starting service config controller"
	I0927 00:15:48.553354       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0927 00:15:48.553555       1 config.go:105] "Starting endpoint slice config controller"
	I0927 00:15:48.553561       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0927 00:15:48.559365       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0927 00:15:48.642646       1 shared_informer.go:320] Caches are synced for node config
	I0927 00:15:48.655802       1 shared_informer.go:320] Caches are synced for service config
	
	
	==> kube-scheduler [a272eb306b03] <==
	W0927 00:15:40.209636       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0927 00:15:40.209777       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0927 00:15:40.314064       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0927 00:15:40.314194       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0927 00:15:40.344146       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0927 00:15:40.344268       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0927 00:15:40.354027       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0927 00:15:40.354143       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0927 00:15:40.402970       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0927 00:15:40.403018       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0927 00:15:40.419201       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0927 00:15:40.419256       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0927 00:15:40.433278       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0927 00:15:40.433327       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0927 00:15:40.524114       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0927 00:15:40.524242       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0927 00:15:40.536723       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0927 00:15:40.536895       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0927 00:15:40.543572       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0927 00:15:40.543710       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0927 00:15:40.566813       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0927 00:15:40.567060       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0927 00:15:40.577663       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0927 00:15:40.577712       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I0927 00:15:42.106842       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 27 00:28:13 addons-921129 kubelet[1969]: E0927 00:28:13.100729    1969 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:registry-test,Image:gcr.io/k8s-minikube/busybox,Command:[],Args:[sh -c wget --spider -S http://registry.kube-system.svc.cluster.local],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:GOOGLE_APPLICATION_CREDENTIALS,Value:/google-app-creds.json,ValueFrom:nil,},EnvVar{Name:PROJECT_ID,Value:this_is_fake,ValueFrom:nil,},EnvVar{Name:GCP_PROJECT,Value:this_is_fake,ValueFrom:nil,},EnvVar{Name:GCLOUD_PROJECT,Value:this_is_fake,ValueFrom:nil,},EnvVar{Name:GOOGLE_CLOUD_PROJECT,Value:this_is_fake,ValueFrom:nil,},EnvVar{Name:CLOUDSDK_CORE_PROJECT,Value:this_is_fake,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-6c2k5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:gcp-creds,ReadOnly:true,MountPath:/google-app-creds.json,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:nil,Stdin:true,StdinOnce:true,TTY:true,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod registry-test_default(4e40928b-b56c-4bce-8a6c-13ca99506e7a): ErrImagePull: Error response from daemon: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/latest\": unauthorized: authentication failed" logger="UnhandledError"
	Sep 27 00:28:13 addons-921129 kubelet[1969]: E0927 00:28:13.102160    1969 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-test\" with ErrImagePull: \"Error response from daemon: Head \\\"https://gcr.io/v2/k8s-minikube/busybox/manifests/latest\\\": unauthorized: authentication failed\"" pod="default/registry-test" podUID="4e40928b-b56c-4bce-8a6c-13ca99506e7a"
	Sep 27 00:28:13 addons-921129 kubelet[1969]: I0927 00:28:13.923206    1969 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4fece3e2-0d9d-421a-99c2-cedfd981f573" path="/var/lib/kubelet/pods/4fece3e2-0d9d-421a-99c2-cedfd981f573/volumes"
	Sep 27 00:28:19 addons-921129 kubelet[1969]: E0927 00:28:19.917037    1969 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\\\"\"" pod="default/busybox" podUID="6c8855e3-d51e-4c1a-bb6c-655767890d4a"
	Sep 27 00:28:25 addons-921129 kubelet[1969]: E0927 00:28:25.917025    1969 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-test\" with ImagePullBackOff: \"Back-off pulling image \\\"gcr.io/k8s-minikube/busybox\\\"\"" pod="default/registry-test" podUID="4e40928b-b56c-4bce-8a6c-13ca99506e7a"
	Sep 27 00:28:31 addons-921129 kubelet[1969]: I0927 00:28:31.764836    1969 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/4e40928b-b56c-4bce-8a6c-13ca99506e7a-gcp-creds\") pod \"4e40928b-b56c-4bce-8a6c-13ca99506e7a\" (UID: \"4e40928b-b56c-4bce-8a6c-13ca99506e7a\") "
	Sep 27 00:28:31 addons-921129 kubelet[1969]: I0927 00:28:31.765517    1969 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6c2k5\" (UniqueName: \"kubernetes.io/projected/4e40928b-b56c-4bce-8a6c-13ca99506e7a-kube-api-access-6c2k5\") pod \"4e40928b-b56c-4bce-8a6c-13ca99506e7a\" (UID: \"4e40928b-b56c-4bce-8a6c-13ca99506e7a\") "
	Sep 27 00:28:31 addons-921129 kubelet[1969]: I0927 00:28:31.765983    1969 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4e40928b-b56c-4bce-8a6c-13ca99506e7a-gcp-creds" (OuterVolumeSpecName: "gcp-creds") pod "4e40928b-b56c-4bce-8a6c-13ca99506e7a" (UID: "4e40928b-b56c-4bce-8a6c-13ca99506e7a"). InnerVolumeSpecName "gcp-creds". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Sep 27 00:28:31 addons-921129 kubelet[1969]: I0927 00:28:31.772152    1969 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4e40928b-b56c-4bce-8a6c-13ca99506e7a-kube-api-access-6c2k5" (OuterVolumeSpecName: "kube-api-access-6c2k5") pod "4e40928b-b56c-4bce-8a6c-13ca99506e7a" (UID: "4e40928b-b56c-4bce-8a6c-13ca99506e7a"). InnerVolumeSpecName "kube-api-access-6c2k5". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 27 00:28:31 addons-921129 kubelet[1969]: I0927 00:28:31.866510    1969 reconciler_common.go:288] "Volume detached for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/4e40928b-b56c-4bce-8a6c-13ca99506e7a-gcp-creds\") on node \"addons-921129\" DevicePath \"\""
	Sep 27 00:28:31 addons-921129 kubelet[1969]: I0927 00:28:31.866549    1969 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-6c2k5\" (UniqueName: \"kubernetes.io/projected/4e40928b-b56c-4bce-8a6c-13ca99506e7a-kube-api-access-6c2k5\") on node \"addons-921129\" DevicePath \"\""
	Sep 27 00:28:32 addons-921129 kubelet[1969]: I0927 00:28:32.472286    1969 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-chp6n\" (UniqueName: \"kubernetes.io/projected/6f46ca63-ee6e-40a2-847d-0027eb2fd753-kube-api-access-chp6n\") pod \"6f46ca63-ee6e-40a2-847d-0027eb2fd753\" (UID: \"6f46ca63-ee6e-40a2-847d-0027eb2fd753\") "
	Sep 27 00:28:32 addons-921129 kubelet[1969]: I0927 00:28:32.474297    1969 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6f46ca63-ee6e-40a2-847d-0027eb2fd753-kube-api-access-chp6n" (OuterVolumeSpecName: "kube-api-access-chp6n") pod "6f46ca63-ee6e-40a2-847d-0027eb2fd753" (UID: "6f46ca63-ee6e-40a2-847d-0027eb2fd753"). InnerVolumeSpecName "kube-api-access-chp6n". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 27 00:28:32 addons-921129 kubelet[1969]: I0927 00:28:32.573843    1969 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8f2qs\" (UniqueName: \"kubernetes.io/projected/a0e1613c-f205-4152-b2b1-2310f2f418b0-kube-api-access-8f2qs\") pod \"a0e1613c-f205-4152-b2b1-2310f2f418b0\" (UID: \"a0e1613c-f205-4152-b2b1-2310f2f418b0\") "
	Sep 27 00:28:32 addons-921129 kubelet[1969]: I0927 00:28:32.574049    1969 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-chp6n\" (UniqueName: \"kubernetes.io/projected/6f46ca63-ee6e-40a2-847d-0027eb2fd753-kube-api-access-chp6n\") on node \"addons-921129\" DevicePath \"\""
	Sep 27 00:28:32 addons-921129 kubelet[1969]: I0927 00:28:32.576946    1969 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0e1613c-f205-4152-b2b1-2310f2f418b0-kube-api-access-8f2qs" (OuterVolumeSpecName: "kube-api-access-8f2qs") pod "a0e1613c-f205-4152-b2b1-2310f2f418b0" (UID: "a0e1613c-f205-4152-b2b1-2310f2f418b0"). InnerVolumeSpecName "kube-api-access-8f2qs". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 27 00:28:32 addons-921129 kubelet[1969]: I0927 00:28:32.675077    1969 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-8f2qs\" (UniqueName: \"kubernetes.io/projected/a0e1613c-f205-4152-b2b1-2310f2f418b0-kube-api-access-8f2qs\") on node \"addons-921129\" DevicePath \"\""
	Sep 27 00:28:32 addons-921129 kubelet[1969]: I0927 00:28:32.789480    1969 scope.go:117] "RemoveContainer" containerID="ca26ee74221fdf061c6be71e29e14d4b7ecaa0f2c0f4ba12cd1189116bbf136f"
	Sep 27 00:28:32 addons-921129 kubelet[1969]: I0927 00:28:32.850082    1969 scope.go:117] "RemoveContainer" containerID="ca26ee74221fdf061c6be71e29e14d4b7ecaa0f2c0f4ba12cd1189116bbf136f"
	Sep 27 00:28:32 addons-921129 kubelet[1969]: E0927 00:28:32.853652    1969 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = Unknown desc = Error response from daemon: No such container: ca26ee74221fdf061c6be71e29e14d4b7ecaa0f2c0f4ba12cd1189116bbf136f" containerID="ca26ee74221fdf061c6be71e29e14d4b7ecaa0f2c0f4ba12cd1189116bbf136f"
	Sep 27 00:28:32 addons-921129 kubelet[1969]: I0927 00:28:32.853775    1969 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"docker","ID":"ca26ee74221fdf061c6be71e29e14d4b7ecaa0f2c0f4ba12cd1189116bbf136f"} err="failed to get container status \"ca26ee74221fdf061c6be71e29e14d4b7ecaa0f2c0f4ba12cd1189116bbf136f\": rpc error: code = Unknown desc = Error response from daemon: No such container: ca26ee74221fdf061c6be71e29e14d4b7ecaa0f2c0f4ba12cd1189116bbf136f"
	Sep 27 00:28:32 addons-921129 kubelet[1969]: I0927 00:28:32.853831    1969 scope.go:117] "RemoveContainer" containerID="ecdb15fbd0a186b19142ce9d27845be5f463b3a7f30398b119596ae6823dfbc5"
	Sep 27 00:28:32 addons-921129 kubelet[1969]: I0927 00:28:32.889682    1969 scope.go:117] "RemoveContainer" containerID="ecdb15fbd0a186b19142ce9d27845be5f463b3a7f30398b119596ae6823dfbc5"
	Sep 27 00:28:32 addons-921129 kubelet[1969]: E0927 00:28:32.890472    1969 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = Unknown desc = Error response from daemon: No such container: ecdb15fbd0a186b19142ce9d27845be5f463b3a7f30398b119596ae6823dfbc5" containerID="ecdb15fbd0a186b19142ce9d27845be5f463b3a7f30398b119596ae6823dfbc5"
	Sep 27 00:28:32 addons-921129 kubelet[1969]: I0927 00:28:32.890503    1969 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"docker","ID":"ecdb15fbd0a186b19142ce9d27845be5f463b3a7f30398b119596ae6823dfbc5"} err="failed to get container status \"ecdb15fbd0a186b19142ce9d27845be5f463b3a7f30398b119596ae6823dfbc5\": rpc error: code = Unknown desc = Error response from daemon: No such container: ecdb15fbd0a186b19142ce9d27845be5f463b3a7f30398b119596ae6823dfbc5"
	
	
	==> storage-provisioner [c1d3752fb178] <==
	I0927 00:15:55.168628       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0927 00:15:55.185042       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0927 00:15:55.185145       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0927 00:15:55.197290       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0927 00:15:55.197439       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-921129_4f2f7e26-8a1d-4f78-90c7-40ddf6de7171!
	I0927 00:15:55.199604       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"2082e19c-4ce7-4593-9ec3-dafc6fa796cd", APIVersion:"v1", ResourceVersion:"577", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-921129_4f2f7e26-8a1d-4f78-90c7-40ddf6de7171 became leader
	I0927 00:15:55.297993       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-921129_4f2f7e26-8a1d-4f78-90c7-40ddf6de7171!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-921129 -n addons-921129
helpers_test.go:261: (dbg) Run:  kubectl --context addons-921129 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/Registry]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-921129 describe pod busybox
helpers_test.go:282: (dbg) kubectl --context addons-921129 describe pod busybox:

-- stdout --
	Name:             busybox
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-921129/192.168.39.24
	Start Time:       Fri, 27 Sep 2024 00:19:17 +0000
	Labels:           integration-test=busybox
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.27
	IPs:
	  IP:  10.244.0.27
	Containers:
	  busybox:
	    Container ID:  
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      sleep
	      3600
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:
	      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
	      PROJECT_ID:                      this_is_fake
	      GCP_PROJECT:                     this_is_fake
	      GCLOUD_PROJECT:                  this_is_fake
	      GOOGLE_CLOUD_PROJECT:            this_is_fake
	      CLOUDSDK_CORE_PROJECT:           this_is_fake
	    Mounts:
	      /google-app-creds.json from gcp-creds (ro)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-rmzq6 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-rmzq6:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	  gcp-creds:
	    Type:          HostPath (bare host directory volume)
	    Path:          /var/lib/minikube/google_application_credentials.json
	    HostPathType:  File
	QoS Class:         BestEffort
	Node-Selectors:    <none>
	Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                     From               Message
	  ----     ------     ----                    ----               -------
	  Normal   Scheduled  9m16s                   default-scheduler  Successfully assigned default/busybox to addons-921129
	  Normal   Pulling    7m42s (x4 over 9m16s)   kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Warning  Failed     7m41s (x4 over 9m16s)   kubelet            Failed to pull image "gcr.io/k8s-minikube/busybox:1.28.4-glibc": Error response from daemon: Head "https://gcr.io/v2/k8s-minikube/busybox/manifests/1.28.4-glibc": unauthorized: authentication failed
	  Warning  Failed     7m41s (x4 over 9m16s)   kubelet            Error: ErrImagePull
	  Warning  Failed     7m29s (x6 over 9m16s)   kubelet            Error: ImagePullBackOff
	  Normal   BackOff    4m10s (x20 over 9m16s)  kubelet            Back-off pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"

-- /stdout --
helpers_test.go:285: <<< TestAddons/parallel/Registry FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/Registry (73.78s)

Test pass (308/340)

Order passed test Duration
3 TestDownloadOnly/v1.20.0/json-events 6.95
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.06
9 TestDownloadOnly/v1.20.0/DeleteAll 0.14
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.13
12 TestDownloadOnly/v1.31.1/json-events 3.47
13 TestDownloadOnly/v1.31.1/preload-exists 0
17 TestDownloadOnly/v1.31.1/LogsDuration 0.06
18 TestDownloadOnly/v1.31.1/DeleteAll 0.14
19 TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds 0.13
21 TestBinaryMirror 0.6
22 TestOffline 150.42
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.05
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.05
27 TestAddons/Setup 223.17
29 TestAddons/serial/Volcano 42.88
31 TestAddons/serial/GCPAuth/Namespaces 0.12
34 TestAddons/parallel/Ingress 18.08
35 TestAddons/parallel/InspektorGadget 10.83
36 TestAddons/parallel/MetricsServer 6.71
38 TestAddons/parallel/CSI 50.68
39 TestAddons/parallel/Headlamp 19.49
40 TestAddons/parallel/CloudSpanner 6.69
41 TestAddons/parallel/LocalPath 55.11
42 TestAddons/parallel/NvidiaDevicePlugin 6.44
43 TestAddons/parallel/Yakd 11.71
44 TestAddons/StoppedEnableDisable 8.59
45 TestCertOptions 90.33
46 TestCertExpiration 345.46
47 TestDockerFlags 101.17
48 TestForceSystemdFlag 131.25
49 TestForceSystemdEnv 82.31
51 TestKVMDriverInstallOrUpdate 6.12
55 TestErrorSpam/setup 50.22
56 TestErrorSpam/start 0.36
57 TestErrorSpam/status 0.76
58 TestErrorSpam/pause 1.25
59 TestErrorSpam/unpause 1.47
60 TestErrorSpam/stop 7.27
63 TestFunctional/serial/CopySyncFile 0
64 TestFunctional/serial/StartWithProxy 63.39
65 TestFunctional/serial/AuditLog 0
66 TestFunctional/serial/SoftStart 42.14
67 TestFunctional/serial/KubeContext 0.05
68 TestFunctional/serial/KubectlGetPods 0.07
71 TestFunctional/serial/CacheCmd/cache/add_remote 2.35
72 TestFunctional/serial/CacheCmd/cache/add_local 1.3
73 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.05
74 TestFunctional/serial/CacheCmd/cache/list 0.05
75 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.23
76 TestFunctional/serial/CacheCmd/cache/cache_reload 1.17
77 TestFunctional/serial/CacheCmd/cache/delete 0.1
78 TestFunctional/serial/MinikubeKubectlCmd 0.11
79 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.1
80 TestFunctional/serial/ExtraConfig 40.46
81 TestFunctional/serial/ComponentHealth 0.07
82 TestFunctional/serial/LogsCmd 1.05
83 TestFunctional/serial/LogsFileCmd 1.07
84 TestFunctional/serial/InvalidService 4.26
86 TestFunctional/parallel/ConfigCmd 0.35
87 TestFunctional/parallel/DashboardCmd 34.88
88 TestFunctional/parallel/DryRun 0.28
89 TestFunctional/parallel/InternationalLanguage 0.17
90 TestFunctional/parallel/StatusCmd 0.92
94 TestFunctional/parallel/ServiceCmdConnect 8.64
95 TestFunctional/parallel/AddonsCmd 0.11
96 TestFunctional/parallel/PersistentVolumeClaim 48
98 TestFunctional/parallel/SSHCmd 0.47
99 TestFunctional/parallel/CpCmd 1.39
100 TestFunctional/parallel/MySQL 31.35
101 TestFunctional/parallel/FileSync 0.19
102 TestFunctional/parallel/CertSync 1.4
106 TestFunctional/parallel/NodeLabels 0.07
108 TestFunctional/parallel/NonActiveRuntimeDisabled 0.22
110 TestFunctional/parallel/License 0.17
111 TestFunctional/parallel/ServiceCmd/DeployApp 11.22
112 TestFunctional/parallel/Version/short 0.05
113 TestFunctional/parallel/Version/components 0.77
114 TestFunctional/parallel/ImageCommands/ImageListShort 0.21
115 TestFunctional/parallel/ImageCommands/ImageListTable 0.21
116 TestFunctional/parallel/ImageCommands/ImageListJson 0.21
117 TestFunctional/parallel/ImageCommands/ImageListYaml 0.19
118 TestFunctional/parallel/ImageCommands/ImageBuild 4.11
119 TestFunctional/parallel/ImageCommands/Setup 1.57
120 TestFunctional/parallel/DockerEnv/bash 0.91
121 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.26
122 TestFunctional/parallel/UpdateContextCmd/no_changes 0.1
123 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.1
124 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.1
134 TestFunctional/parallel/ProfileCmd/profile_not_create 0.39
135 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.83
136 TestFunctional/parallel/ProfileCmd/profile_list 0.39
137 TestFunctional/parallel/ProfileCmd/profile_json_output 0.33
138 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.5
139 TestFunctional/parallel/MountCmd/any-port 7.85
140 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.35
141 TestFunctional/parallel/ImageCommands/ImageRemove 0.41
142 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.74
143 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.41
144 TestFunctional/parallel/ServiceCmd/List 0.33
145 TestFunctional/parallel/ServiceCmd/JSONOutput 0.27
146 TestFunctional/parallel/ServiceCmd/HTTPS 0.33
147 TestFunctional/parallel/ServiceCmd/Format 0.34
148 TestFunctional/parallel/MountCmd/specific-port 1.8
149 TestFunctional/parallel/ServiceCmd/URL 0.31
150 TestFunctional/parallel/MountCmd/VerifyCleanup 1.59
151 TestFunctional/delete_echo-server_images 0.04
152 TestFunctional/delete_my-image_image 0.02
153 TestFunctional/delete_minikube_cached_images 0.02
154 TestGvisorAddon 235.76
157 TestMultiControlPlane/serial/StartCluster 222.72
158 TestMultiControlPlane/serial/DeployApp 5.86
159 TestMultiControlPlane/serial/PingHostFromPods 1.32
160 TestMultiControlPlane/serial/AddWorkerNode 63.68
161 TestMultiControlPlane/serial/NodeLabels 0.07
162 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.89
163 TestMultiControlPlane/serial/CopyFile 13.16
164 TestMultiControlPlane/serial/StopSecondaryNode 13.95
165 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.68
166 TestMultiControlPlane/serial/RestartSecondaryNode 38.93
167 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.9
168 TestMultiControlPlane/serial/RestartClusterKeepsNodes 258.43
169 TestMultiControlPlane/serial/DeleteSecondaryNode 7.38
170 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.64
171 TestMultiControlPlane/serial/StopCluster 38.27
172 TestMultiControlPlane/serial/RestartCluster 128.33
173 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.65
174 TestMultiControlPlane/serial/AddSecondaryNode 80.4
175 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.87
178 TestImageBuild/serial/Setup 51.99
179 TestImageBuild/serial/NormalBuild 2.31
180 TestImageBuild/serial/BuildWithBuildArg 1.55
181 TestImageBuild/serial/BuildWithDockerIgnore 1.09
182 TestImageBuild/serial/BuildWithSpecifiedDockerfile 0.98
186 TestJSONOutput/start/Command 97.72
187 TestJSONOutput/start/Audit 0
189 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
190 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
192 TestJSONOutput/pause/Command 0.59
193 TestJSONOutput/pause/Audit 0
195 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
196 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
198 TestJSONOutput/unpause/Command 0.55
199 TestJSONOutput/unpause/Audit 0
201 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
202 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
204 TestJSONOutput/stop/Command 7.56
205 TestJSONOutput/stop/Audit 0
207 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
208 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
209 TestErrorJSONOutput 0.2
214 TestMainNoArgs 0.04
215 TestMinikubeProfile 103.82
218 TestMountStart/serial/StartWithMountFirst 29.31
219 TestMountStart/serial/VerifyMountFirst 0.4
220 TestMountStart/serial/StartWithMountSecond 31.05
221 TestMountStart/serial/VerifyMountSecond 0.39
222 TestMountStart/serial/DeleteFirst 0.69
223 TestMountStart/serial/VerifyMountPostDelete 0.38
224 TestMountStart/serial/Stop 2.28
225 TestMountStart/serial/RestartStopped 24.98
226 TestMountStart/serial/VerifyMountPostStop 0.38
229 TestMultiNode/serial/FreshStart2Nodes 133.11
230 TestMultiNode/serial/DeployApp2Nodes 4.21
231 TestMultiNode/serial/PingHostFrom2Pods 0.81
232 TestMultiNode/serial/AddNode 58.05
233 TestMultiNode/serial/MultiNodeLabels 0.06
234 TestMultiNode/serial/ProfileList 0.61
235 TestMultiNode/serial/CopyFile 7.5
236 TestMultiNode/serial/StopNode 3.38
237 TestMultiNode/serial/StartAfterStop 42.74
238 TestMultiNode/serial/RestartKeepsNodes 305.3
239 TestMultiNode/serial/DeleteNode 2.27
240 TestMultiNode/serial/StopMultiNode 25.87
241 TestMultiNode/serial/RestartMultiNode 119.69
242 TestMultiNode/serial/ValidateNameConflict 54.66
247 TestPreload 185.17
249 TestScheduledStopUnix 121.47
250 TestSkaffold 136.26
253 TestRunningBinaryUpgrade 119.56
255 TestKubernetesUpgrade 243.1
264 TestStoppedBinaryUpgrade/Setup 0.63
265 TestStoppedBinaryUpgrade/Upgrade 154.63
267 TestNoKubernetes/serial/StartNoK8sWithVersion 0.07
268 TestNoKubernetes/serial/StartWithK8s 74.81
269 TestNoKubernetes/serial/StartWithStopK8s 18
270 TestNoKubernetes/serial/Start 35.61
271 TestStoppedBinaryUpgrade/MinikubeLogs 1.45
273 TestPause/serial/Start 118.31
285 TestNoKubernetes/serial/VerifyK8sNotRunning 0.21
286 TestNoKubernetes/serial/ProfileList 0.99
287 TestNoKubernetes/serial/Stop 2.29
288 TestNoKubernetes/serial/StartNoArgs 65.57
289 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.21
291 TestStartStop/group/old-k8s-version/serial/FirstStart 132.96
292 TestPause/serial/SecondStartNoReconfiguration 60.37
294 TestStartStop/group/no-preload/serial/FirstStart 77.29
295 TestPause/serial/Pause 0.72
296 TestPause/serial/VerifyStatus 0.31
297 TestPause/serial/Unpause 0.68
298 TestPause/serial/PauseAgain 0.93
299 TestPause/serial/DeletePaused 1.41
300 TestPause/serial/VerifyDeletedResources 0.72
302 TestStartStop/group/embed-certs/serial/FirstStart 74.64
303 TestStartStop/group/no-preload/serial/DeployApp 10.39
304 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.42
305 TestStartStop/group/no-preload/serial/Stop 13.36
306 TestStartStop/group/old-k8s-version/serial/DeployApp 8.5
307 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.18
308 TestStartStop/group/old-k8s-version/serial/Stop 13.42
309 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.29
310 TestStartStop/group/no-preload/serial/SecondStart 298.87
311 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.31
312 TestStartStop/group/old-k8s-version/serial/SecondStart 537.1
314 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 94.95
315 TestStartStop/group/embed-certs/serial/DeployApp 9.34
316 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1
317 TestStartStop/group/embed-certs/serial/Stop 13.37
318 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.2
319 TestStartStop/group/embed-certs/serial/SecondStart 353.71
320 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 8.35
321 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.08
322 TestStartStop/group/default-k8s-diff-port/serial/Stop 13.34
323 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.2
324 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 313.98
325 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6.01
326 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.08
327 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.2
328 TestStartStop/group/no-preload/serial/Pause 2.55
330 TestStartStop/group/newest-cni/serial/FirstStart 60.94
331 TestStartStop/group/newest-cni/serial/DeployApp 0
332 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.91
333 TestStartStop/group/newest-cni/serial/Stop 8.34
334 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.2
335 TestStartStop/group/newest-cni/serial/SecondStart 41.95
336 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6.01
337 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.08
338 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.22
339 TestStartStop/group/embed-certs/serial/Pause 2.53
340 TestNetworkPlugins/group/auto/Start 100.26
341 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
342 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
343 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.21
344 TestStartStop/group/newest-cni/serial/Pause 2.39
345 TestNetworkPlugins/group/kindnet/Start 92.94
346 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 10.01
347 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.09
348 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.21
349 TestStartStop/group/default-k8s-diff-port/serial/Pause 2.64
350 TestNetworkPlugins/group/calico/Start 93.86
351 TestNetworkPlugins/group/auto/KubeletFlags 0.29
352 TestNetworkPlugins/group/auto/NetCatPod 11.24
353 TestNetworkPlugins/group/auto/DNS 0.18
354 TestNetworkPlugins/group/auto/Localhost 0.16
355 TestNetworkPlugins/group/auto/HairPin 0.16
356 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
357 TestNetworkPlugins/group/kindnet/KubeletFlags 0.24
358 TestNetworkPlugins/group/kindnet/NetCatPod 12.39
359 TestNetworkPlugins/group/custom-flannel/Start 71.34
360 TestNetworkPlugins/group/kindnet/DNS 0.2
361 TestNetworkPlugins/group/kindnet/Localhost 0.17
362 TestNetworkPlugins/group/kindnet/HairPin 0.15
363 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6.01
364 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.09
365 TestNetworkPlugins/group/false/Start 107.01
366 TestNetworkPlugins/group/calico/ControllerPod 6.01
367 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.22
368 TestStartStop/group/old-k8s-version/serial/Pause 2.54
369 TestNetworkPlugins/group/calico/KubeletFlags 0.25
370 TestNetworkPlugins/group/enable-default-cni/Start 120.52
371 TestNetworkPlugins/group/calico/NetCatPod 12.26
372 TestNetworkPlugins/group/calico/DNS 0.18
373 TestNetworkPlugins/group/calico/Localhost 0.14
374 TestNetworkPlugins/group/calico/HairPin 0.15
375 TestNetworkPlugins/group/flannel/Start 103.25
376 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.21
377 TestNetworkPlugins/group/custom-flannel/NetCatPod 11.23
378 TestNetworkPlugins/group/custom-flannel/DNS 0.18
379 TestNetworkPlugins/group/custom-flannel/Localhost 0.15
380 TestNetworkPlugins/group/custom-flannel/HairPin 0.14
381 TestNetworkPlugins/group/bridge/Start 76.63
382 TestNetworkPlugins/group/false/KubeletFlags 0.22
383 TestNetworkPlugins/group/false/NetCatPod 11.28
384 TestNetworkPlugins/group/false/DNS 0.18
385 TestNetworkPlugins/group/false/Localhost 0.16
386 TestNetworkPlugins/group/false/HairPin 0.16
387 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.34
388 TestNetworkPlugins/group/enable-default-cni/NetCatPod 15.13
389 TestNetworkPlugins/group/kubenet/Start 92.85
390 TestNetworkPlugins/group/flannel/ControllerPod 6.01
391 TestNetworkPlugins/group/enable-default-cni/DNS 0.21
392 TestNetworkPlugins/group/enable-default-cni/Localhost 0.2
393 TestNetworkPlugins/group/enable-default-cni/HairPin 0.17
394 TestNetworkPlugins/group/flannel/KubeletFlags 0.21
395 TestNetworkPlugins/group/flannel/NetCatPod 12.26
396 TestNetworkPlugins/group/bridge/KubeletFlags 0.22
397 TestNetworkPlugins/group/bridge/NetCatPod 11.28
398 TestNetworkPlugins/group/flannel/DNS 0.19
399 TestNetworkPlugins/group/flannel/Localhost 0.16
400 TestNetworkPlugins/group/flannel/HairPin 0.17
401 TestNetworkPlugins/group/bridge/DNS 0.21
402 TestNetworkPlugins/group/bridge/Localhost 0.15
403 TestNetworkPlugins/group/bridge/HairPin 0.17
404 TestNetworkPlugins/group/kubenet/KubeletFlags 0.21
405 TestNetworkPlugins/group/kubenet/NetCatPod 11.26
406 TestNetworkPlugins/group/kubenet/DNS 0.17
407 TestNetworkPlugins/group/kubenet/Localhost 0.13
408 TestNetworkPlugins/group/kubenet/HairPin 0.14
TestDownloadOnly/v1.20.0/json-events (6.95s)

=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-927319 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=kvm2 
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-927319 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=kvm2 : (6.94480687s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (6.95s)

TestDownloadOnly/v1.20.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.20.0/preload-exists
I0927 00:14:46.676055   22114 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
I0927 00:14:46.676160   22114 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19711-14912/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

TestDownloadOnly/v1.20.0/LogsDuration (0.06s)

=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-927319
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-927319: exit status 85 (58.384594ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-927319 | jenkins | v1.34.0 | 27 Sep 24 00:14 UTC |          |
	|         | -p download-only-927319        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/27 00:14:39
	Running on machine: ubuntu-20-agent-7
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0927 00:14:39.769652   22126 out.go:345] Setting OutFile to fd 1 ...
	I0927 00:14:39.769917   22126 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 00:14:39.769926   22126 out.go:358] Setting ErrFile to fd 2...
	I0927 00:14:39.769931   22126 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 00:14:39.770159   22126 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19711-14912/.minikube/bin
	W0927 00:14:39.770320   22126 root.go:314] Error reading config file at /home/jenkins/minikube-integration/19711-14912/.minikube/config/config.json: open /home/jenkins/minikube-integration/19711-14912/.minikube/config/config.json: no such file or directory
	I0927 00:14:39.771011   22126 out.go:352] Setting JSON to true
	I0927 00:14:39.771947   22126 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":3431,"bootTime":1727392649,"procs":173,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0927 00:14:39.772056   22126 start.go:139] virtualization: kvm guest
	I0927 00:14:39.774573   22126 out.go:97] [download-only-927319] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	W0927 00:14:39.774708   22126 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/19711-14912/.minikube/cache/preloaded-tarball: no such file or directory
	I0927 00:14:39.774778   22126 notify.go:220] Checking for updates...
	I0927 00:14:39.776005   22126 out.go:169] MINIKUBE_LOCATION=19711
	I0927 00:14:39.777629   22126 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0927 00:14:39.779289   22126 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19711-14912/kubeconfig
	I0927 00:14:39.780855   22126 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19711-14912/.minikube
	I0927 00:14:39.782377   22126 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0927 00:14:39.785024   22126 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0927 00:14:39.785284   22126 driver.go:394] Setting default libvirt URI to qemu:///system
	I0927 00:14:39.890923   22126 out.go:97] Using the kvm2 driver based on user configuration
	I0927 00:14:39.890959   22126 start.go:297] selected driver: kvm2
	I0927 00:14:39.890966   22126 start.go:901] validating driver "kvm2" against <nil>
	I0927 00:14:39.891353   22126 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0927 00:14:39.891498   22126 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19711-14912/.minikube/bin:/home/jenkins/workspace/KVM_Linux_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0927 00:14:39.908068   22126 install.go:137] /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0927 00:14:39.908137   22126 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0927 00:14:39.908679   22126 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0927 00:14:39.908834   22126 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0927 00:14:39.908863   22126 cni.go:84] Creating CNI manager for ""
	I0927 00:14:39.908909   22126 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0927 00:14:39.908963   22126 start.go:340] cluster config:
	{Name:download-only-927319 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-927319 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Cont
ainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0927 00:14:39.909151   22126 iso.go:125] acquiring lock: {Name:mkb5ac60d416b321ea42aa90cf43a9e41df90177 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0927 00:14:39.911509   22126 out.go:97] Downloading VM boot image ...
	I0927 00:14:39.911571   22126 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso.sha256 -> /home/jenkins/minikube-integration/19711-14912/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso
	I0927 00:14:42.821197   22126 out.go:97] Starting "download-only-927319" primary control-plane node in "download-only-927319" cluster
	I0927 00:14:42.821235   22126 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0927 00:14:42.845214   22126 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4
	I0927 00:14:42.845244   22126 cache.go:56] Caching tarball of preloaded images
	I0927 00:14:42.845427   22126 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0927 00:14:42.847255   22126 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0927 00:14:42.847290   22126 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4 ...
	I0927 00:14:42.877673   22126 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4?checksum=md5:9a82241e9b8b4ad2b5cca73108f2c7a3 -> /home/jenkins/minikube-integration/19711-14912/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4
	
	
	* The control-plane node download-only-927319 host does not exist
	  To start a cluster, run: "minikube start -p download-only-927319"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.06s)

TestDownloadOnly/v1.20.0/DeleteAll (0.14s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.14s)

TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.13s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-927319
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.13s)

TestDownloadOnly/v1.31.1/json-events (3.47s)

=== RUN   TestDownloadOnly/v1.31.1/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-517789 --force --alsologtostderr --kubernetes-version=v1.31.1 --container-runtime=docker --driver=kvm2 
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-517789 --force --alsologtostderr --kubernetes-version=v1.31.1 --container-runtime=docker --driver=kvm2 : (3.468908379s)
--- PASS: TestDownloadOnly/v1.31.1/json-events (3.47s)

TestDownloadOnly/v1.31.1/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.31.1/preload-exists
I0927 00:14:50.473856   22114 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
I0927 00:14:50.473908   22114 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19711-14912/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.31.1/preload-exists (0.00s)

TestDownloadOnly/v1.31.1/LogsDuration (0.06s)

=== RUN   TestDownloadOnly/v1.31.1/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-517789
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-517789: exit status 85 (60.81206ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-927319 | jenkins | v1.34.0 | 27 Sep 24 00:14 UTC |                     |
	|         | -p download-only-927319        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.34.0 | 27 Sep 24 00:14 UTC | 27 Sep 24 00:14 UTC |
	| delete  | -p download-only-927319        | download-only-927319 | jenkins | v1.34.0 | 27 Sep 24 00:14 UTC | 27 Sep 24 00:14 UTC |
	| start   | -o=json --download-only        | download-only-517789 | jenkins | v1.34.0 | 27 Sep 24 00:14 UTC |                     |
	|         | -p download-only-517789        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/27 00:14:47
	Running on machine: ubuntu-20-agent-7
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0927 00:14:47.043583   22328 out.go:345] Setting OutFile to fd 1 ...
	I0927 00:14:47.043687   22328 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 00:14:47.043695   22328 out.go:358] Setting ErrFile to fd 2...
	I0927 00:14:47.043699   22328 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 00:14:47.043877   22328 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19711-14912/.minikube/bin
	I0927 00:14:47.044444   22328 out.go:352] Setting JSON to true
	I0927 00:14:47.045270   22328 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":3438,"bootTime":1727392649,"procs":171,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0927 00:14:47.045370   22328 start.go:139] virtualization: kvm guest
	I0927 00:14:47.047450   22328 out.go:97] [download-only-517789] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0927 00:14:47.047559   22328 notify.go:220] Checking for updates...
	I0927 00:14:47.048898   22328 out.go:169] MINIKUBE_LOCATION=19711
	I0927 00:14:47.050254   22328 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0927 00:14:47.051591   22328 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19711-14912/kubeconfig
	I0927 00:14:47.052798   22328 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19711-14912/.minikube
	I0927 00:14:47.053974   22328 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	
	
	* The control-plane node download-only-517789 host does not exist
	  To start a cluster, run: "minikube start -p download-only-517789"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.1/LogsDuration (0.06s)

TestDownloadOnly/v1.31.1/DeleteAll (0.14s)

=== RUN   TestDownloadOnly/v1.31.1/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.31.1/DeleteAll (0.14s)

TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds (0.13s)

=== RUN   TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-517789
--- PASS: TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds (0.13s)

TestBinaryMirror (0.6s)

=== RUN   TestBinaryMirror
I0927 00:14:51.061776   22114 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl.sha256
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-270882 --alsologtostderr --binary-mirror http://127.0.0.1:42777 --driver=kvm2 
helpers_test.go:175: Cleaning up "binary-mirror-270882" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-270882
--- PASS: TestBinaryMirror (0.60s)

TestOffline (150.42s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-docker-930906 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2 
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-docker-930906 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2 : (2m29.405243664s)
helpers_test.go:175: Cleaning up "offline-docker-930906" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-docker-930906
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-docker-930906: (1.015470052s)
--- PASS: TestOffline (150.42s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:975: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-921129
addons_test.go:975: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-921129: exit status 85 (53.668136ms)

-- stdout --
	* Profile "addons-921129" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-921129"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:986: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-921129
addons_test.go:986: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-921129: exit status 85 (52.31242ms)

-- stdout --
	* Profile "addons-921129" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-921129"

                                                
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

TestAddons/Setup (223.17s)

=== RUN   TestAddons/Setup
addons_test.go:107: (dbg) Run:  out/minikube-linux-amd64 start -p addons-921129 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=kvm2  --addons=ingress --addons=ingress-dns
addons_test.go:107: (dbg) Done: out/minikube-linux-amd64 start -p addons-921129 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=kvm2  --addons=ingress --addons=ingress-dns: (3m43.174242858s)
--- PASS: TestAddons/Setup (223.17s)

TestAddons/serial/Volcano (42.88s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:851: volcano-controller stabilized in 22.391661ms
addons_test.go:843: volcano-admission stabilized in 22.473147ms
addons_test.go:835: volcano-scheduler stabilized in 22.519576ms
addons_test.go:857: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-scheduler-6c9778cbdf-wz8bt" [da9277c9-85fe-4684-9c7a-b5bd5d8fa3ee] Running
addons_test.go:857: (dbg) TestAddons/serial/Volcano: app=volcano-scheduler healthy within 6.004972485s
addons_test.go:861: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-admission-5874dfdd79-lf4jb" [88b997e5-10d1-4ffd-bded-2ac0a6427eb9] Running
addons_test.go:861: (dbg) TestAddons/serial/Volcano: app=volcano-admission healthy within 5.005007345s
addons_test.go:865: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-controllers-789ffc5785-xn4gm" [1bb36c70-0d10-490c-9788-a62ac1fe10eb] Running
addons_test.go:865: (dbg) TestAddons/serial/Volcano: app=volcano-controller healthy within 5.003885589s
addons_test.go:870: (dbg) Run:  kubectl --context addons-921129 delete -n volcano-system job volcano-admission-init
addons_test.go:876: (dbg) Run:  kubectl --context addons-921129 create -f testdata/vcjob.yaml
addons_test.go:884: (dbg) Run:  kubectl --context addons-921129 get vcjob -n my-volcano
addons_test.go:902: (dbg) TestAddons/serial/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:344: "test-job-nginx-0" [3a35e0c8-4b2b-469d-8dbe-d0bd1f84099a] Pending
helpers_test.go:344: "test-job-nginx-0" [3a35e0c8-4b2b-469d-8dbe-d0bd1f84099a] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "test-job-nginx-0" [3a35e0c8-4b2b-469d-8dbe-d0bd1f84099a] Running
addons_test.go:902: (dbg) TestAddons/serial/Volcano: volcano.sh/job-name=test-job healthy within 15.004262772s
addons_test.go:906: (dbg) Run:  out/minikube-linux-amd64 -p addons-921129 addons disable volcano --alsologtostderr -v=1
addons_test.go:906: (dbg) Done: out/minikube-linux-amd64 -p addons-921129 addons disable volcano --alsologtostderr -v=1: (11.327592639s)
--- PASS: TestAddons/serial/Volcano (42.88s)

TestAddons/serial/GCPAuth/Namespaces (0.12s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:594: (dbg) Run:  kubectl --context addons-921129 create ns new-namespace
addons_test.go:608: (dbg) Run:  kubectl --context addons-921129 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.12s)

TestAddons/parallel/Ingress (18.08s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:205: (dbg) Run:  kubectl --context addons-921129 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:230: (dbg) Run:  kubectl --context addons-921129 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:243: (dbg) Run:  kubectl --context addons-921129 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:248: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [73386c1b-2718-41aa-b660-5326b6035486] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [73386c1b-2718-41aa-b660-5326b6035486] Running
addons_test.go:248: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 8.01847747s
I0927 00:27:46.419113   22114 kapi.go:150] Service nginx in namespace default found.
addons_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p addons-921129 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:284: (dbg) Run:  kubectl --context addons-921129 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:289: (dbg) Run:  out/minikube-linux-amd64 -p addons-921129 ip
addons_test.go:295: (dbg) Run:  nslookup hello-john.test 192.168.39.24
addons_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p addons-921129 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:309: (dbg) Run:  out/minikube-linux-amd64 -p addons-921129 addons disable ingress --alsologtostderr -v=1
addons_test.go:309: (dbg) Done: out/minikube-linux-amd64 -p addons-921129 addons disable ingress --alsologtostderr -v=1: (7.785941905s)
--- PASS: TestAddons/parallel/Ingress (18.08s)

TestAddons/parallel/InspektorGadget (10.83s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:786: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-l9c62" [319194b9-6deb-44ad-9410-7e07c09a9163] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:786: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.006713041s
addons_test.go:789: (dbg) Run:  out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-921129
addons_test.go:789: (dbg) Done: out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-921129: (5.823777109s)
--- PASS: TestAddons/parallel/InspektorGadget (10.83s)

TestAddons/parallel/MetricsServer (6.71s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:405: metrics-server stabilized in 3.95296ms
addons_test.go:407: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-84c5f94fbc-t2mww" [ce1f5642-6ded-410b-b64d-44b41bb0286e] Running
addons_test.go:407: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 6.004242198s
addons_test.go:413: (dbg) Run:  kubectl --context addons-921129 top pods -n kube-system
addons_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p addons-921129 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (6.71s)

TestAddons/parallel/CSI (50.68s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
I0927 00:27:20.373402   22114 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
addons_test.go:505: csi-hostpath-driver pods stabilized in 7.418781ms
addons_test.go:508: (dbg) Run:  kubectl --context addons-921129 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:513: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-921129 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-921129 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-921129 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-921129 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-921129 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-921129 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-921129 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-921129 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-921129 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:518: (dbg) Run:  kubectl --context addons-921129 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:523: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [ecb2f80b-f027-407f-87c2-d06a05705a7f] Pending
helpers_test.go:344: "task-pv-pod" [ecb2f80b-f027-407f-87c2-d06a05705a7f] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [ecb2f80b-f027-407f-87c2-d06a05705a7f] Running
addons_test.go:523: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 8.004115765s
addons_test.go:528: (dbg) Run:  kubectl --context addons-921129 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:533: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-921129 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: 
helpers_test.go:419: (dbg) Run:  kubectl --context addons-921129 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:538: (dbg) Run:  kubectl --context addons-921129 delete pod task-pv-pod
addons_test.go:538: (dbg) Done: kubectl --context addons-921129 delete pod task-pv-pod: (1.104740762s)
addons_test.go:544: (dbg) Run:  kubectl --context addons-921129 delete pvc hpvc
addons_test.go:550: (dbg) Run:  kubectl --context addons-921129 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:555: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-921129 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-921129 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-921129 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-921129 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-921129 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-921129 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-921129 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-921129 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-921129 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-921129 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-921129 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-921129 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-921129 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-921129 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-921129 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:560: (dbg) Run:  kubectl --context addons-921129 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:565: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [89e792ec-cb14-4029-812a-7067c7ec3776] Pending
helpers_test.go:344: "task-pv-pod-restore" [89e792ec-cb14-4029-812a-7067c7ec3776] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [89e792ec-cb14-4029-812a-7067c7ec3776] Running
addons_test.go:565: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 9.004427315s
addons_test.go:570: (dbg) Run:  kubectl --context addons-921129 delete pod task-pv-pod-restore
addons_test.go:574: (dbg) Run:  kubectl --context addons-921129 delete pvc hpvc-restore
addons_test.go:578: (dbg) Run:  kubectl --context addons-921129 delete volumesnapshot new-snapshot-demo
addons_test.go:582: (dbg) Run:  out/minikube-linux-amd64 -p addons-921129 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:582: (dbg) Done: out/minikube-linux-amd64 -p addons-921129 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.825674145s)
addons_test.go:586: (dbg) Run:  out/minikube-linux-amd64 -p addons-921129 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (50.68s)

TestAddons/parallel/Headlamp (19.49s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:768: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-921129 --alsologtostderr -v=1
addons_test.go:773: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-7b5c95b59d-cqlpj" [36d0939d-0097-4f38-8361-53dcd8c3fb53] Pending
helpers_test.go:344: "headlamp-7b5c95b59d-cqlpj" [36d0939d-0097-4f38-8361-53dcd8c3fb53] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-7b5c95b59d-cqlpj" [36d0939d-0097-4f38-8361-53dcd8c3fb53] Running
addons_test.go:773: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 13.004506532s
addons_test.go:777: (dbg) Run:  out/minikube-linux-amd64 -p addons-921129 addons disable headlamp --alsologtostderr -v=1
addons_test.go:777: (dbg) Done: out/minikube-linux-amd64 -p addons-921129 addons disable headlamp --alsologtostderr -v=1: (5.660377236s)
--- PASS: TestAddons/parallel/Headlamp (19.49s)

TestAddons/parallel/CloudSpanner (6.69s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:805: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-5b584cc74-2rsdz" [18f49d53-9b8e-457d-9de4-0c77787d3404] Running
addons_test.go:805: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 6.006379677s
addons_test.go:808: (dbg) Run:  out/minikube-linux-amd64 addons disable cloud-spanner -p addons-921129
--- PASS: TestAddons/parallel/CloudSpanner (6.69s)

TestAddons/parallel/LocalPath (55.11s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:920: (dbg) Run:  kubectl --context addons-921129 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:926: (dbg) Run:  kubectl --context addons-921129 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:930: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-921129 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-921129 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-921129 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-921129 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-921129 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-921129 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:933: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [8ef3d2c1-9299-40bc-88e7-7210af2b1300] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [8ef3d2c1-9299-40bc-88e7-7210af2b1300] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [8ef3d2c1-9299-40bc-88e7-7210af2b1300] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:933: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 6.003458617s
addons_test.go:938: (dbg) Run:  kubectl --context addons-921129 get pvc test-pvc -o=json
addons_test.go:947: (dbg) Run:  out/minikube-linux-amd64 -p addons-921129 ssh "cat /opt/local-path-provisioner/pvc-8efc34d2-173d-43c8-a797-ed9149a8a1e5_default_test-pvc/file1"
addons_test.go:959: (dbg) Run:  kubectl --context addons-921129 delete pod test-local-path
addons_test.go:963: (dbg) Run:  kubectl --context addons-921129 delete pvc test-pvc
addons_test.go:967: (dbg) Run:  out/minikube-linux-amd64 -p addons-921129 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:967: (dbg) Done: out/minikube-linux-amd64 -p addons-921129 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.298069356s)
--- PASS: TestAddons/parallel/LocalPath (55.11s)
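The repeated `kubectl get pvc test-pvc -o jsonpath={.status.phase}` calls above are the test harness polling the claim until it leaves Pending. A minimal sketch of that wait loop, with a stubbed `get_phase` callable standing in for the real kubectl invocation (the function and parameter names here are illustrative, not minikube's actual helpers):

```python
import time

def wait_for_phase(get_phase, want="Bound", timeout=300.0, interval=2.0,
                   clock=time.monotonic, sleep=time.sleep):
    """Poll get_phase() until it returns `want` or the timeout elapses."""
    deadline = clock() + timeout
    while clock() < deadline:
        if get_phase() == want:
            return True
        sleep(interval)
    return False

# Stub standing in for successive `kubectl get pvc ... -o jsonpath={.status.phase}` calls.
phases = iter(["Pending", "Pending", "Bound"])
ok = wait_for_phase(lambda: next(phases), timeout=10.0, interval=0.0)
```

The test's 5m0s budget for the PVC corresponds to `timeout` here; each log line above is one iteration of the loop.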

TestAddons/parallel/NvidiaDevicePlugin (6.44s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:999: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-49wpz" [1061b6c1-f508-4d17-a693-c6231549ad3b] Running
addons_test.go:999: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.004570835s
addons_test.go:1002: (dbg) Run:  out/minikube-linux-amd64 addons disable nvidia-device-plugin -p addons-921129
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.44s)

TestAddons/parallel/Yakd (11.71s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

=== CONT  TestAddons/parallel/Yakd
addons_test.go:1010: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-67d98fc6b-nwbqk" [7ad4a365-f49e-4c7c-aee4-b4efba92030f] Running
addons_test.go:1010: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.004093252s
addons_test.go:1014: (dbg) Run:  out/minikube-linux-amd64 -p addons-921129 addons disable yakd --alsologtostderr -v=1
addons_test.go:1014: (dbg) Done: out/minikube-linux-amd64 -p addons-921129 addons disable yakd --alsologtostderr -v=1: (5.706036188s)
--- PASS: TestAddons/parallel/Yakd (11.71s)

TestAddons/StoppedEnableDisable (8.59s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:170: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-921129
addons_test.go:170: (dbg) Done: out/minikube-linux-amd64 stop -p addons-921129: (8.303968467s)
addons_test.go:174: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-921129
addons_test.go:178: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-921129
addons_test.go:183: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-921129
--- PASS: TestAddons/StoppedEnableDisable (8.59s)

TestCertOptions (90.33s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-914684 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2 
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-914684 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2 : (1m27.984007302s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-914684 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-914684 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-914684 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-914684" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-914684
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-914684: (1.654023114s)
--- PASS: TestCertOptions (90.33s)

TestCertExpiration (345.46s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-276339 --memory=2048 --cert-expiration=3m --driver=kvm2 
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-276339 --memory=2048 --cert-expiration=3m --driver=kvm2 : (1m11.286392331s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-276339 --memory=2048 --cert-expiration=8760h --driver=kvm2 
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-276339 --memory=2048 --cert-expiration=8760h --driver=kvm2 : (1m33.088256094s)
helpers_test.go:175: Cleaning up "cert-expiration-276339" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-276339
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-276339: (1.083996553s)
--- PASS: TestCertExpiration (345.46s)
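The flag arithmetic behind this test: the cluster is first started with `--cert-expiration=3m` so its certs expire almost immediately, then restarted with `--cert-expiration=8760h` (one year). A minimal sketch of the expiry check such a restart implies, assuming a simple clock-skew allowance (this is not minikube's actual regeneration logic, just the date math):

```python
from datetime import datetime, timedelta

def needs_regeneration(not_after: datetime, now: datetime,
                       skew: timedelta = timedelta(minutes=5)) -> bool:
    """A cert is due for regeneration once `now` is within `skew` of its NotAfter."""
    return now >= not_after - skew

t0 = datetime(2024, 9, 27, 1, 0, 0)
short = t0 + timedelta(minutes=3)   # --cert-expiration=3m
long_ = t0 + timedelta(hours=8760)  # --cert-expiration=8760h, i.e. 365 days
```

The second `start` in the log succeeds because the 3m certs are already inside the regeneration window by the time the profile is restarted.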

TestDockerFlags (101.17s)

=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags

=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-linux-amd64 start -p docker-flags-175411 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=kvm2 
docker_test.go:51: (dbg) Done: out/minikube-linux-amd64 start -p docker-flags-175411 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=kvm2 : (1m39.671630006s)
docker_test.go:56: (dbg) Run:  out/minikube-linux-amd64 -p docker-flags-175411 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:67: (dbg) Run:  out/minikube-linux-amd64 -p docker-flags-175411 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
helpers_test.go:175: Cleaning up "docker-flags-175411" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-flags-175411
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-flags-175411: (1.060829127s)
--- PASS: TestDockerFlags (101.17s)
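The assertion step above checks that the `--docker-env=FOO=BAR --docker-env=BAZ=BAT` flags landed in the output of `systemctl show docker --property=Environment`. A simplified parser for that property line (real systemd `Environment=` values may be quoted and contain spaces; this sketch handles only the simple unquoted case the test uses):

```python
def parse_environment(line: str) -> dict:
    """Parse a line like 'Environment=FOO=BAR BAZ=BAT' into a dict
    (unquoted, space-free values only)."""
    _, _, rest = line.partition("=")
    return dict(pair.split("=", 1) for pair in rest.split() if "=" in pair)

env = parse_environment("Environment=FOO=BAR BAZ=BAT")
```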

TestForceSystemdFlag (131.25s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-584662 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2 
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-584662 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2 : (2m9.938250376s)
docker_test.go:110: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-584662 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-flag-584662" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-584662
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-584662: (1.03047473s)
--- PASS: TestForceSystemdFlag (131.25s)

TestForceSystemdEnv (82.31s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-103391 --memory=2048 --alsologtostderr -v=5 --driver=kvm2 
I0927 01:14:15.214294   22114 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0927 01:14:15.214418   22114 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/workspace/KVM_Linux_integration/testdata/kvm2-driver-without-version:/home/jenkins/workspace/KVM_Linux_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
W0927 01:14:15.253429   22114 install.go:62] docker-machine-driver-kvm2: exit status 1
W0927 01:14:15.253729   22114 out.go:174] [unset outFile]: * Downloading driver docker-machine-driver-kvm2:
I0927 01:14:15.253793   22114 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 -> /tmp/TestKVMDriverInstallOrUpdate843084277/001/docker-machine-driver-kvm2
I0927 01:14:15.694237   22114 driver.go:46] failed to download arch specific driver: getter: &{Ctx:context.Background Src:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 Dst:/tmp/TestKVMDriverInstallOrUpdate843084277/001/docker-machine-driver-kvm2.download Pwd: Mode:2 Umask:---------- Detectors:[0x466f640 0x466f640 0x466f640 0x466f640 0x466f640 0x466f640 0x466f640] Decompressors:map[bz2:0xc0003f2500 gz:0xc0003f2508 tar:0xc0003f2490 tar.bz2:0xc0003f24c0 tar.gz:0xc0003f24d0 tar.xz:0xc0003f24e0 tar.zst:0xc0003f24f0 tbz2:0xc0003f24c0 tgz:0xc0003f24d0 txz:0xc0003f24e0 tzst:0xc0003f24f0 xz:0xc0003f2510 zip:0xc0003f2520 zst:0xc0003f2518] Getters:map[file:0xc001e6c1c0 http:0xc0004ca410 https:0xc0004ca4b0] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404. trying to get the common version
I0927 01:14:15.694286   22114 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2.sha256 -> /tmp/TestKVMDriverInstallOrUpdate843084277/001/docker-machine-driver-kvm2
I0927 01:14:18.574598   22114 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0927 01:14:18.626852   22114 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/workspace/KVM_Linux_integration/testdata/kvm2-driver-older-version:/home/jenkins/workspace/KVM_Linux_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
I0927 01:14:18.661811   22114 install.go:137] /home/jenkins/workspace/KVM_Linux_integration/testdata/kvm2-driver-older-version/docker-machine-driver-kvm2 version is 1.1.1
W0927 01:14:18.661852   22114 install.go:62] docker-machine-driver-kvm2: docker-machine-driver-kvm2 is version 1.1.1, want 1.3.0
W0927 01:14:18.661926   22114 out.go:174] [unset outFile]: * Downloading driver docker-machine-driver-kvm2:
I0927 01:14:18.661955   22114 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 -> /tmp/TestKVMDriverInstallOrUpdate843084277/002/docker-machine-driver-kvm2
I0927 01:14:19.010372   22114 driver.go:46] failed to download arch specific driver: getter: &{Ctx:context.Background Src:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 Dst:/tmp/TestKVMDriverInstallOrUpdate843084277/002/docker-machine-driver-kvm2.download Pwd: Mode:2 Umask:---------- Detectors:[0x466f640 0x466f640 0x466f640 0x466f640 0x466f640 0x466f640 0x466f640] Decompressors:map[bz2:0xc0003f2500 gz:0xc0003f2508 tar:0xc0003f2490 tar.bz2:0xc0003f24c0 tar.gz:0xc0003f24d0 tar.xz:0xc0003f24e0 tar.zst:0xc0003f24f0 tbz2:0xc0003f24c0 tgz:0xc0003f24d0 txz:0xc0003f24e0 tzst:0xc0003f24f0 xz:0xc0003f2510 zip:0xc0003f2520 zst:0xc0003f2518] Getters:map[file:0xc0006aa1e0 http:0xc0005c44b0 https:0xc0005c4500] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404. trying to get the common version
I0927 01:14:19.010415   22114 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2.sha256 -> /tmp/TestKVMDriverInstallOrUpdate843084277/002/docker-machine-driver-kvm2
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-103391 --memory=2048 --alsologtostderr -v=5 --driver=kvm2 : (1m21.055529468s)
docker_test.go:110: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-env-103391 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-env-103391" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-103391
--- PASS: TestForceSystemdEnv (82.31s)
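The interleaved driver-install logs above show a two-step download: the arch-specific asset (`docker-machine-driver-kvm2-amd64`) is tried first, and when its checksum file 404s, the downloader falls back to the common unsuffixed asset. A sketch of that fallback, with `fetch` as a hypothetical stand-in for the real downloader (which raises on a checksum or HTTP error):

```python
def download_driver(version: str, arch: str, fetch) -> str:
    """Try the arch-specific release asset first; on failure fall back to
    the common (unsuffixed) asset, mirroring the log above."""
    base = ("https://github.com/kubernetes/minikube/releases/download/"
            f"{version}/docker-machine-driver-kvm2")
    try:
        return fetch(f"{base}-{arch}")
    except Exception:
        # "failed to download arch specific driver ... trying to get the common version"
        return fetch(base)

def fake_fetch(url: str) -> str:
    """Stub: the arch-specific asset's checksum file is missing, as in the log."""
    if url.endswith("-amd64"):
        raise RuntimeError("invalid checksum: bad response code: 404")
    return url

got = download_driver("v1.3.0", "amd64", fake_fetch)
```

This is why the log shows two `download.go:107] Downloading:` lines per install attempt: one for the `-amd64` URL, one for the common one.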

TestKVMDriverInstallOrUpdate (6.12s)

=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate

=== CONT  TestKVMDriverInstallOrUpdate
--- PASS: TestKVMDriverInstallOrUpdate (6.12s)

TestErrorSpam/setup (50.22s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-856458 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-856458 --driver=kvm2 
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-856458 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-856458 --driver=kvm2 : (50.217114804s)
--- PASS: TestErrorSpam/setup (50.22s)

TestErrorSpam/start (0.36s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-856458 --log_dir /tmp/nospam-856458 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-856458 --log_dir /tmp/nospam-856458 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-856458 --log_dir /tmp/nospam-856458 start --dry-run
--- PASS: TestErrorSpam/start (0.36s)

TestErrorSpam/status (0.76s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-856458 --log_dir /tmp/nospam-856458 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-856458 --log_dir /tmp/nospam-856458 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-856458 --log_dir /tmp/nospam-856458 status
--- PASS: TestErrorSpam/status (0.76s)

TestErrorSpam/pause (1.25s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-856458 --log_dir /tmp/nospam-856458 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-856458 --log_dir /tmp/nospam-856458 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-856458 --log_dir /tmp/nospam-856458 pause
--- PASS: TestErrorSpam/pause (1.25s)

TestErrorSpam/unpause (1.47s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-856458 --log_dir /tmp/nospam-856458 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-856458 --log_dir /tmp/nospam-856458 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-856458 --log_dir /tmp/nospam-856458 unpause
--- PASS: TestErrorSpam/unpause (1.47s)

TestErrorSpam/stop (7.27s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-856458 --log_dir /tmp/nospam-856458 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-856458 --log_dir /tmp/nospam-856458 stop: (3.560231902s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-856458 --log_dir /tmp/nospam-856458 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-856458 --log_dir /tmp/nospam-856458 stop: (1.861814298s)
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-856458 --log_dir /tmp/nospam-856458 stop
error_spam_test.go:182: (dbg) Done: out/minikube-linux-amd64 -p nospam-856458 --log_dir /tmp/nospam-856458 stop: (1.849101927s)
--- PASS: TestErrorSpam/stop (7.27s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1855: local sync path: /home/jenkins/minikube-integration/19711-14912/.minikube/files/etc/test/nested/copy/22114/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (63.39s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2234: (dbg) Run:  out/minikube-linux-amd64 start -p functional-471370 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2 
functional_test.go:2234: (dbg) Done: out/minikube-linux-amd64 start -p functional-471370 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2 : (1m3.389115066s)
--- PASS: TestFunctional/serial/StartWithProxy (63.39s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (42.14s)

=== RUN   TestFunctional/serial/SoftStart
I0927 00:31:05.563687   22114 config.go:182] Loaded profile config "functional-471370": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
functional_test.go:659: (dbg) Run:  out/minikube-linux-amd64 start -p functional-471370 --alsologtostderr -v=8
functional_test.go:659: (dbg) Done: out/minikube-linux-amd64 start -p functional-471370 --alsologtostderr -v=8: (42.1397811s)
functional_test.go:663: soft start took 42.14039142s for "functional-471370" cluster.
I0927 00:31:47.703832   22114 config.go:182] Loaded profile config "functional-471370": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
--- PASS: TestFunctional/serial/SoftStart (42.14s)

TestFunctional/serial/KubeContext (0.05s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:681: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.05s)

TestFunctional/serial/KubectlGetPods (0.07s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:696: (dbg) Run:  kubectl --context functional-471370 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.07s)

TestFunctional/serial/CacheCmd/cache/add_remote (2.35s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-471370 cache add registry.k8s.io/pause:3.1
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-471370 cache add registry.k8s.io/pause:3.3
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-471370 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (2.35s)

TestFunctional/serial/CacheCmd/cache/add_local (1.3s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1077: (dbg) Run:  docker build -t minikube-local-cache-test:functional-471370 /tmp/TestFunctionalserialCacheCmdcacheadd_local1770595095/001
functional_test.go:1089: (dbg) Run:  out/minikube-linux-amd64 -p functional-471370 cache add minikube-local-cache-test:functional-471370
functional_test.go:1094: (dbg) Run:  out/minikube-linux-amd64 -p functional-471370 cache delete minikube-local-cache-test:functional-471370
functional_test.go:1083: (dbg) Run:  docker rmi minikube-local-cache-test:functional-471370
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.30s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1102: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

TestFunctional/serial/CacheCmd/cache/list (0.05s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1110: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.05s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.23s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1124: (dbg) Run:  out/minikube-linux-amd64 -p functional-471370 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.23s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.17s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1147: (dbg) Run:  out/minikube-linux-amd64 -p functional-471370 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Run:  out/minikube-linux-amd64 -p functional-471370 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-471370 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (221.784631ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1158: (dbg) Run:  out/minikube-linux-amd64 -p functional-471370 cache reload
functional_test.go:1163: (dbg) Run:  out/minikube-linux-amd64 -p functional-471370 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.17s)
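The sequence above is: delete the image on the node (`docker rmi`), confirm `crictl inspecti` now fails, run `minikube cache reload`, and confirm the image is back. A toy model of that state machine, with `FakeNode` as an illustrative stand-in for the node's image store (not minikube's real implementation):

```python
class FakeNode:
    """Mirrors the cache_reload steps: rmi -> inspect fails ->
    cache reload -> inspect succeeds."""
    def __init__(self, cached):
        self.cached = set(cached)   # images in the on-host cache
        self.present = set(cached)  # images loaded on the node

    def rmi(self, image):
        self.present.discard(image)

    def inspect(self, image) -> bool:
        return image in self.present

    def cache_reload(self):
        # re-load every cached image onto the node
        self.present |= self.cached

node = FakeNode({"registry.k8s.io/pause:latest"})
node.rmi("registry.k8s.io/pause:latest")
missing_after_rmi = not node.inspect("registry.k8s.io/pause:latest")
node.cache_reload()
present_after_reload = node.inspect("registry.k8s.io/pause:latest")
```

The expected `Non-zero exit` with `FATA[0000] no such image` in the log is the middle state here: the image is gone from the node but still in the cache, so `cache reload` can restore it.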

TestFunctional/serial/CacheCmd/cache/delete (0.1s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1172: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1172: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.10s)

TestFunctional/serial/MinikubeKubectlCmd (0.11s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:716: (dbg) Run:  out/minikube-linux-amd64 -p functional-471370 kubectl -- --context functional-471370 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.11s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.1s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:741: (dbg) Run:  out/kubectl --context functional-471370 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.10s)

TestFunctional/serial/ExtraConfig (40.46s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:757: (dbg) Run:  out/minikube-linux-amd64 start -p functional-471370 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:757: (dbg) Done: out/minikube-linux-amd64 start -p functional-471370 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (40.459539135s)
functional_test.go:761: restart took 40.459653815s for "functional-471370" cluster.
I0927 00:32:33.724922   22114 config.go:182] Loaded profile config "functional-471370": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
--- PASS: TestFunctional/serial/ExtraConfig (40.46s)

TestFunctional/serial/ComponentHealth (0.07s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:810: (dbg) Run:  kubectl --context functional-471370 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:825: etcd phase: Running
functional_test.go:835: etcd status: Ready
functional_test.go:825: kube-apiserver phase: Running
functional_test.go:835: kube-apiserver status: Ready
functional_test.go:825: kube-controller-manager phase: Running
functional_test.go:835: kube-controller-manager status: Ready
functional_test.go:825: kube-scheduler phase: Running
functional_test.go:835: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)

TestFunctional/serial/LogsCmd (1.05s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1236: (dbg) Run:  out/minikube-linux-amd64 -p functional-471370 logs
functional_test.go:1236: (dbg) Done: out/minikube-linux-amd64 -p functional-471370 logs: (1.045212268s)
--- PASS: TestFunctional/serial/LogsCmd (1.05s)

TestFunctional/serial/LogsFileCmd (1.07s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1250: (dbg) Run:  out/minikube-linux-amd64 -p functional-471370 logs --file /tmp/TestFunctionalserialLogsFileCmd3422301622/001/logs.txt
functional_test.go:1250: (dbg) Done: out/minikube-linux-amd64 -p functional-471370 logs --file /tmp/TestFunctionalserialLogsFileCmd3422301622/001/logs.txt: (1.067717576s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.07s)

TestFunctional/serial/InvalidService (4.26s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2321: (dbg) Run:  kubectl --context functional-471370 apply -f testdata/invalidsvc.yaml
functional_test.go:2335: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-471370
functional_test.go:2335: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-471370: exit status 115 (286.099097ms)

-- stdout --
	|-----------|-------------|-------------|-----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |             URL             |
	|-----------|-------------|-------------|-----------------------------|
	| default   | invalid-svc |          80 | http://192.168.39.126:32158 |
	|-----------|-------------|-------------|-----------------------------|
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2327: (dbg) Run:  kubectl --context functional-471370 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.26s)

TestFunctional/parallel/ConfigCmd (0.35s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-471370 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-471370 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-471370 config get cpus: exit status 14 (66.427557ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-471370 config set cpus 2
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-471370 config get cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-471370 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-471370 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-471370 config get cpus: exit status 14 (47.596145ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.35s)

TestFunctional/parallel/DashboardCmd (34.88s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:905: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-471370 --alsologtostderr -v=1]
functional_test.go:910: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-471370 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 32570: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (34.88s)

TestFunctional/parallel/DryRun (0.28s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:974: (dbg) Run:  out/minikube-linux-amd64 start -p functional-471370 --dry-run --memory 250MB --alsologtostderr --driver=kvm2 
functional_test.go:974: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-471370 --dry-run --memory 250MB --alsologtostderr --driver=kvm2 : exit status 23 (137.559023ms)

-- stdout --
	* [functional-471370] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19711
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19711-14912/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19711-14912/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I0927 00:32:48.083251   31838 out.go:345] Setting OutFile to fd 1 ...
	I0927 00:32:48.083362   31838 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 00:32:48.083373   31838 out.go:358] Setting ErrFile to fd 2...
	I0927 00:32:48.083379   31838 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 00:32:48.083578   31838 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19711-14912/.minikube/bin
	I0927 00:32:48.084166   31838 out.go:352] Setting JSON to false
	I0927 00:32:48.085168   31838 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":4519,"bootTime":1727392649,"procs":260,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0927 00:32:48.085264   31838 start.go:139] virtualization: kvm guest
	I0927 00:32:48.087639   31838 out.go:177] * [functional-471370] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0927 00:32:48.088999   31838 out.go:177]   - MINIKUBE_LOCATION=19711
	I0927 00:32:48.089025   31838 notify.go:220] Checking for updates...
	I0927 00:32:48.091725   31838 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0927 00:32:48.093180   31838 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19711-14912/kubeconfig
	I0927 00:32:48.094635   31838 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19711-14912/.minikube
	I0927 00:32:48.095935   31838 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0927 00:32:48.097679   31838 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0927 00:32:48.099317   31838 config.go:182] Loaded profile config "functional-471370": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0927 00:32:48.099736   31838 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0927 00:32:48.099801   31838 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 00:32:48.115313   31838 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37583
	I0927 00:32:48.115756   31838 main.go:141] libmachine: () Calling .GetVersion
	I0927 00:32:48.116314   31838 main.go:141] libmachine: Using API Version  1
	I0927 00:32:48.116357   31838 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 00:32:48.116758   31838 main.go:141] libmachine: () Calling .GetMachineName
	I0927 00:32:48.116953   31838 main.go:141] libmachine: (functional-471370) Calling .DriverName
	I0927 00:32:48.117229   31838 driver.go:394] Setting default libvirt URI to qemu:///system
	I0927 00:32:48.117544   31838 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0927 00:32:48.117585   31838 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 00:32:48.132991   31838 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33055
	I0927 00:32:48.133518   31838 main.go:141] libmachine: () Calling .GetVersion
	I0927 00:32:48.134000   31838 main.go:141] libmachine: Using API Version  1
	I0927 00:32:48.134026   31838 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 00:32:48.134385   31838 main.go:141] libmachine: () Calling .GetMachineName
	I0927 00:32:48.134568   31838 main.go:141] libmachine: (functional-471370) Calling .DriverName
	I0927 00:32:48.169798   31838 out.go:177] * Using the kvm2 driver based on existing profile
	I0927 00:32:48.171228   31838 start.go:297] selected driver: kvm2
	I0927 00:32:48.171251   31838 start.go:901] validating driver "kvm2" against &{Name:functional-471370 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-471370 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.126 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0927 00:32:48.171402   31838 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0927 00:32:48.173729   31838 out.go:201] 
	W0927 00:32:48.175049   31838 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0927 00:32:48.176149   31838 out.go:201] 

** /stderr **
functional_test.go:991: (dbg) Run:  out/minikube-linux-amd64 start -p functional-471370 --dry-run --alsologtostderr -v=1 --driver=kvm2 
--- PASS: TestFunctional/parallel/DryRun (0.28s)
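The dry-run failure above shows that minikube validates the requested `--memory` allocation against a usable minimum of 1800MB before exiting with `RSRC_INSUFFICIENT_REQ_MEMORY`. A minimal Go sketch of such a guard, assuming a simplified check (this is illustrative only, not minikube's actual code path; `validateRequestedMemory` and `minUsableMemoryMB` are hypothetical names):

```go
package main

import "fmt"

// minUsableMemoryMB mirrors the minimum cited by the log's error message.
const minUsableMemoryMB = 1800

// validateRequestedMemory is a simplified, hypothetical sketch of the
// guard behind RSRC_INSUFFICIENT_REQ_MEMORY: even a --dry-run start
// validates the requested allocation and fails fast if it is too small.
func validateRequestedMemory(requestedMB int) error {
	if requestedMB < minUsableMemoryMB {
		return fmt.Errorf("RSRC_INSUFFICIENT_REQ_MEMORY: requested memory allocation %dMiB is less than the usable minimum of %dMB",
			requestedMB, minUsableMemoryMB)
	}
	return nil
}

func main() {
	// 250 mirrors the --memory 250MB flag used by the test above.
	if err := validateRequestedMemory(250); err != nil {
		fmt.Println("X Exiting due to", err)
	}
}
```

Because the check runs before any VM is created, the test can assert on the exit status (23) without waiting for a driver to start.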

TestFunctional/parallel/InternationalLanguage (0.17s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1020: (dbg) Run:  out/minikube-linux-amd64 start -p functional-471370 --dry-run --memory 250MB --alsologtostderr --driver=kvm2 
functional_test.go:1020: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-471370 --dry-run --memory 250MB --alsologtostderr --driver=kvm2 : exit status 23 (174.54561ms)

-- stdout --
	* [functional-471370] minikube v1.34.0 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19711
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19711-14912/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19711-14912/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I0927 00:32:48.374203   31893 out.go:345] Setting OutFile to fd 1 ...
	I0927 00:32:48.374347   31893 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 00:32:48.374360   31893 out.go:358] Setting ErrFile to fd 2...
	I0927 00:32:48.374367   31893 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 00:32:48.374768   31893 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19711-14912/.minikube/bin
	I0927 00:32:48.375595   31893 out.go:352] Setting JSON to false
	I0927 00:32:48.376998   31893 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":4519,"bootTime":1727392649,"procs":264,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0927 00:32:48.377145   31893 start.go:139] virtualization: kvm guest
	I0927 00:32:48.379577   31893 out.go:177] * [functional-471370] minikube v1.34.0 sur Ubuntu 20.04 (kvm/amd64)
	I0927 00:32:48.381028   31893 out.go:177]   - MINIKUBE_LOCATION=19711
	I0927 00:32:48.381047   31893 notify.go:220] Checking for updates...
	I0927 00:32:48.384518   31893 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0927 00:32:48.386068   31893 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19711-14912/kubeconfig
	I0927 00:32:48.387797   31893 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19711-14912/.minikube
	I0927 00:32:48.389239   31893 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0927 00:32:48.390573   31893 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0927 00:32:48.392277   31893 config.go:182] Loaded profile config "functional-471370": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0927 00:32:48.392742   31893 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0927 00:32:48.392799   31893 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 00:32:48.409263   31893 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41125
	I0927 00:32:48.409897   31893 main.go:141] libmachine: () Calling .GetVersion
	I0927 00:32:48.410512   31893 main.go:141] libmachine: Using API Version  1
	I0927 00:32:48.410553   31893 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 00:32:48.410995   31893 main.go:141] libmachine: () Calling .GetMachineName
	I0927 00:32:48.411217   31893 main.go:141] libmachine: (functional-471370) Calling .DriverName
	I0927 00:32:48.411473   31893 driver.go:394] Setting default libvirt URI to qemu:///system
	I0927 00:32:48.411764   31893 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0927 00:32:48.411798   31893 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 00:32:48.427613   31893 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41431
	I0927 00:32:48.428156   31893 main.go:141] libmachine: () Calling .GetVersion
	I0927 00:32:48.428692   31893 main.go:141] libmachine: Using API Version  1
	I0927 00:32:48.428715   31893 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 00:32:48.429091   31893 main.go:141] libmachine: () Calling .GetMachineName
	I0927 00:32:48.429283   31893 main.go:141] libmachine: (functional-471370) Calling .DriverName
	I0927 00:32:48.470605   31893 out.go:177] * Utilisation du pilote kvm2 basé sur le profil existant
	I0927 00:32:48.471641   31893 start.go:297] selected driver: kvm2
	I0927 00:32:48.471658   31893 start.go:901] validating driver "kvm2" against &{Name:functional-471370 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-471370 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.126 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0927 00:32:48.471761   31893 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0927 00:32:48.489959   31893 out.go:201] 
	W0927 00:32:48.491707   31893 out.go:270] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0927 00:32:48.493769   31893 out.go:201] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.17s)

TestFunctional/parallel/StatusCmd (0.92s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:854: (dbg) Run:  out/minikube-linux-amd64 -p functional-471370 status
functional_test.go:860: (dbg) Run:  out/minikube-linux-amd64 -p functional-471370 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:872: (dbg) Run:  out/minikube-linux-amd64 -p functional-471370 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.92s)
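The `-f` argument in the test above is a Go `text/template` format string rendered against minikube's status fields. A minimal sketch of that mechanism, assuming a hypothetical `Status` struct that only mirrors the four fields the format string references (minikube's real type and rendering live in its `status` command, not here):

```go
package main

import (
	"bytes"
	"fmt"
	"text/template"
)

// Status is a hypothetical struct mirroring the fields referenced by the
// format string in the log (Host, Kubelet, APIServer, Kubeconfig).
type Status struct {
	Host       string
	Kubelet    string
	APIServer  string
	Kubeconfig string
}

// renderStatus parses the user-supplied format string as a Go template
// and executes it against a Status value, the same text/template
// mechanism a `status -f` style flag relies on.
func renderStatus(format string, st Status) (string, error) {
	tmpl, err := template.New("status").Parse(format)
	if err != nil {
		return "", err
	}
	var buf bytes.Buffer
	if err := tmpl.Execute(&buf, st); err != nil {
		return "", err
	}
	return buf.String(), nil
}

func main() {
	st := Status{Host: "Running", Kubelet: "Running", APIServer: "Running", Kubeconfig: "Configured"}
	out, err := renderStatus("host:{{.Host}},kubelet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}", st)
	if err != nil {
		panic(err)
	}
	fmt.Println(out) // host:Running,kubelet:Running,apiserver:Running,kubeconfig:Configured
}
```

Note that anything outside `{{...}}` actions (the `host:` labels and commas) is emitted literally, which is why the test can assert on an exact output line.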

TestFunctional/parallel/ServiceCmdConnect (8.64s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1629: (dbg) Run:  kubectl --context functional-471370 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1635: (dbg) Run:  kubectl --context functional-471370 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-67bdd5bbb4-bx9xj" [ecb0cf67-bc7c-4f6b-aab0-aad2e2b230fe] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-67bdd5bbb4-bx9xj" [ecb0cf67-bc7c-4f6b-aab0-aad2e2b230fe] Running
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 8.077473887s
functional_test.go:1649: (dbg) Run:  out/minikube-linux-amd64 -p functional-471370 service hello-node-connect --url
functional_test.go:1655: found endpoint for hello-node-connect: http://192.168.39.126:30500
functional_test.go:1675: http://192.168.39.126:30500: success! body:

Hostname: hello-node-connect-67bdd5bbb4-bx9xj

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.39.126:8080/

Request Headers:
	accept-encoding=gzip
	host=192.168.39.126:30500
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-

--- PASS: TestFunctional/parallel/ServiceCmdConnect (8.64s)

TestFunctional/parallel/AddonsCmd (0.11s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1690: (dbg) Run:  out/minikube-linux-amd64 -p functional-471370 addons list
functional_test.go:1702: (dbg) Run:  out/minikube-linux-amd64 -p functional-471370 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.11s)

TestFunctional/parallel/PersistentVolumeClaim (48s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [d1a4a421-924f-4663-9ee8-34d5b08d9de1] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.004410278s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-471370 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-471370 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-471370 get pvc myclaim -o=json
I0927 00:32:47.592418   22114 retry.go:31] will retry after 1.458893233s: testpvc phase = "Pending", want "Bound" (msg={TypeMeta:{Kind:PersistentVolumeClaim APIVersion:v1} ObjectMeta:{Name:myclaim GenerateName: Namespace:default SelfLink: UID:d768d852-8ca6-410f-af47-33152f9d8e14 ResourceVersion:678 Generation:0 CreationTimestamp:2024-09-27 00:32:47 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[] Annotations:map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"myclaim","namespace":"default"},"spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"500Mi"}},"volumeMode":"Filesystem"}}
volume.beta.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath volume.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath] OwnerReferences:[] Finalizers:[kubernetes.io/pvc-protection] ManagedFields:[]} Spec:{AccessModes:[ReadWriteOnce] Selector:nil Resources:{Limits:map[] Requests:map[storage:{i:{value:524288000 scale:0} d:{Dec:<nil>} s:500Mi Format:BinarySI}]} VolumeName: StorageClassName:0xc000979010 VolumeMode:0xc000979030 DataSource:nil DataSourceRef:nil VolumeAttributesClassName:<nil>} Status:{Phase:Pending AccessModes:[] Capacity:map[] Conditions:[] AllocatedResources:map[] AllocatedResourceStatuses:map[] CurrentVolumeAttributesClassName:<nil> ModifyVolumeStatus:nil}})
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-471370 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-471370 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [fa8f5abf-3692-4dd0-a2df-f7f527178293] Pending
helpers_test.go:344: "sp-pod" [fa8f5abf-3692-4dd0-a2df-f7f527178293] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [fa8f5abf-3692-4dd0-a2df-f7f527178293] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 16.003990056s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-471370 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-471370 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-471370 delete -f testdata/storage-provisioner/pod.yaml: (1.636957148s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-471370 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [bf75d1bc-0518-4097-b9f9-46d49a8ba288] Pending
helpers_test.go:344: "sp-pod" [bf75d1bc-0518-4097-b9f9-46d49a8ba288] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [bf75d1bc-0518-4097-b9f9-46d49a8ba288] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 23.004429034s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-471370 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (48.00s)

TestFunctional/parallel/SSHCmd (0.47s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1725: (dbg) Run:  out/minikube-linux-amd64 -p functional-471370 ssh "echo hello"
functional_test.go:1742: (dbg) Run:  out/minikube-linux-amd64 -p functional-471370 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.47s)

TestFunctional/parallel/CpCmd (1.39s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-471370 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-471370 ssh -n functional-471370 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-471370 cp functional-471370:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd3036530805/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-471370 ssh -n functional-471370 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-471370 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-471370 ssh -n functional-471370 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.39s)

TestFunctional/parallel/MySQL (31.35s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1793: (dbg) Run:  kubectl --context functional-471370 replace --force -f testdata/mysql.yaml
functional_test.go:1799: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-6cdb49bbb-24xvb" [4bba6cea-9254-42cf-ba01-360da827703f] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-6cdb49bbb-24xvb" [4bba6cea-9254-42cf-ba01-360da827703f] Running
functional_test.go:1799: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 26.005049242s
functional_test.go:1807: (dbg) Run:  kubectl --context functional-471370 exec mysql-6cdb49bbb-24xvb -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context functional-471370 exec mysql-6cdb49bbb-24xvb -- mysql -ppassword -e "show databases;": exit status 1 (327.022133ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **
I0927 00:33:22.318323   22114 retry.go:31] will retry after 1.049575163s: exit status 1
functional_test.go:1807: (dbg) Run:  kubectl --context functional-471370 exec mysql-6cdb49bbb-24xvb -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context functional-471370 exec mysql-6cdb49bbb-24xvb -- mysql -ppassword -e "show databases;": exit status 1 (161.395494ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
I0927 00:33:23.529665   22114 retry.go:31] will retry after 1.603988957s: exit status 1
functional_test.go:1807: (dbg) Run:  kubectl --context functional-471370 exec mysql-6cdb49bbb-24xvb -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context functional-471370 exec mysql-6cdb49bbb-24xvb -- mysql -ppassword -e "show databases;": exit status 1 (140.042649ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
I0927 00:33:25.273984   22114 retry.go:31] will retry after 1.732763098s: exit status 1
functional_test.go:1807: (dbg) Run:  kubectl --context functional-471370 exec mysql-6cdb49bbb-24xvb -- mysql -ppassword -e "show databases;"
2024/09/27 00:33:27 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctional/parallel/MySQL (31.35s)

TestFunctional/parallel/FileSync (0.19s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1929: Checking for existence of /etc/test/nested/copy/22114/hosts within VM
functional_test.go:1931: (dbg) Run:  out/minikube-linux-amd64 -p functional-471370 ssh "sudo cat /etc/test/nested/copy/22114/hosts"
functional_test.go:1936: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.19s)

TestFunctional/parallel/CertSync (1.4s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1972: Checking for existence of /etc/ssl/certs/22114.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-471370 ssh "sudo cat /etc/ssl/certs/22114.pem"
functional_test.go:1972: Checking for existence of /usr/share/ca-certificates/22114.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-471370 ssh "sudo cat /usr/share/ca-certificates/22114.pem"
functional_test.go:1972: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-471370 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/221142.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-471370 ssh "sudo cat /etc/ssl/certs/221142.pem"
functional_test.go:1999: Checking for existence of /usr/share/ca-certificates/221142.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-471370 ssh "sudo cat /usr/share/ca-certificates/221142.pem"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-471370 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.40s)

TestFunctional/parallel/NodeLabels (0.07s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:219: (dbg) Run:  kubectl --context functional-471370 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.07s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.22s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2027: (dbg) Run:  out/minikube-linux-amd64 -p functional-471370 ssh "sudo systemctl is-active crio"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-471370 ssh "sudo systemctl is-active crio": exit status 1 (221.985865ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.22s)

TestFunctional/parallel/License (0.17s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2288: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.17s)

TestFunctional/parallel/ServiceCmd/DeployApp (11.22s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1439: (dbg) Run:  kubectl --context functional-471370 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1445: (dbg) Run:  kubectl --context functional-471370 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-6b9f76b5c7-zm6vl" [f8d5d919-d008-4c6a-b835-2af7dd423692] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-6b9f76b5c7-zm6vl" [f8d5d919-d008-4c6a-b835-2af7dd423692] Running
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 11.004830827s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (11.22s)

TestFunctional/parallel/Version/short (0.05s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2256: (dbg) Run:  out/minikube-linux-amd64 -p functional-471370 version --short
--- PASS: TestFunctional/parallel/Version/short (0.05s)

TestFunctional/parallel/Version/components (0.77s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2270: (dbg) Run:  out/minikube-linux-amd64 -p functional-471370 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.77s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.21s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-471370 image ls --format short --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-471370 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.31.1
registry.k8s.io/kube-proxy:v1.31.1
registry.k8s.io/kube-controller-manager:v1.31.1
registry.k8s.io/kube-apiserver:v1.31.1
registry.k8s.io/etcd:3.5.15-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.11.3
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/minikube-local-cache-test:functional-471370
docker.io/kicbase/echo-server:functional-471370
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-471370 image ls --format short --alsologtostderr:
I0927 00:32:58.295758   32787 out.go:345] Setting OutFile to fd 1 ...
I0927 00:32:58.296025   32787 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0927 00:32:58.296034   32787 out.go:358] Setting ErrFile to fd 2...
I0927 00:32:58.296038   32787 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0927 00:32:58.296207   32787 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19711-14912/.minikube/bin
I0927 00:32:58.296802   32787 config.go:182] Loaded profile config "functional-471370": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0927 00:32:58.296931   32787 config.go:182] Loaded profile config "functional-471370": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0927 00:32:58.297332   32787 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0927 00:32:58.297388   32787 main.go:141] libmachine: Launching plugin server for driver kvm2
I0927 00:32:58.313163   32787 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43207
I0927 00:32:58.313704   32787 main.go:141] libmachine: () Calling .GetVersion
I0927 00:32:58.314333   32787 main.go:141] libmachine: Using API Version  1
I0927 00:32:58.314350   32787 main.go:141] libmachine: () Calling .SetConfigRaw
I0927 00:32:58.314739   32787 main.go:141] libmachine: () Calling .GetMachineName
I0927 00:32:58.315040   32787 main.go:141] libmachine: (functional-471370) Calling .GetState
I0927 00:32:58.317215   32787 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0927 00:32:58.317268   32787 main.go:141] libmachine: Launching plugin server for driver kvm2
I0927 00:32:58.333206   32787 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43419
I0927 00:32:58.333791   32787 main.go:141] libmachine: () Calling .GetVersion
I0927 00:32:58.334357   32787 main.go:141] libmachine: Using API Version  1
I0927 00:32:58.334385   32787 main.go:141] libmachine: () Calling .SetConfigRaw
I0927 00:32:58.334854   32787 main.go:141] libmachine: () Calling .GetMachineName
I0927 00:32:58.335060   32787 main.go:141] libmachine: (functional-471370) Calling .DriverName
I0927 00:32:58.335290   32787 ssh_runner.go:195] Run: systemctl --version
I0927 00:32:58.335313   32787 main.go:141] libmachine: (functional-471370) Calling .GetSSHHostname
I0927 00:32:58.338384   32787 main.go:141] libmachine: (functional-471370) DBG | domain functional-471370 has defined MAC address 52:54:00:a1:b9:36 in network mk-functional-471370
I0927 00:32:58.338778   32787 main.go:141] libmachine: (functional-471370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:b9:36", ip: ""} in network mk-functional-471370: {Iface:virbr1 ExpiryTime:2024-09-27 01:30:16 +0000 UTC Type:0 Mac:52:54:00:a1:b9:36 Iaid: IPaddr:192.168.39.126 Prefix:24 Hostname:functional-471370 Clientid:01:52:54:00:a1:b9:36}
I0927 00:32:58.338808   32787 main.go:141] libmachine: (functional-471370) DBG | domain functional-471370 has defined IP address 192.168.39.126 and MAC address 52:54:00:a1:b9:36 in network mk-functional-471370
I0927 00:32:58.339010   32787 main.go:141] libmachine: (functional-471370) Calling .GetSSHPort
I0927 00:32:58.339223   32787 main.go:141] libmachine: (functional-471370) Calling .GetSSHKeyPath
I0927 00:32:58.339393   32787 main.go:141] libmachine: (functional-471370) Calling .GetSSHUsername
I0927 00:32:58.339539   32787 sshutil.go:53] new ssh client: &{IP:192.168.39.126 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19711-14912/.minikube/machines/functional-471370/id_rsa Username:docker}
I0927 00:32:58.425428   32787 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
I0927 00:32:58.451001   32787 main.go:141] libmachine: Making call to close driver server
I0927 00:32:58.451017   32787 main.go:141] libmachine: (functional-471370) Calling .Close
I0927 00:32:58.451325   32787 main.go:141] libmachine: Successfully made call to close driver server
I0927 00:32:58.451370   32787 main.go:141] libmachine: Making call to close connection to plugin binary
I0927 00:32:58.451384   32787 main.go:141] libmachine: Making call to close driver server
I0927 00:32:58.451382   32787 main.go:141] libmachine: (functional-471370) DBG | Closing plugin on server side
I0927 00:32:58.451393   32787 main.go:141] libmachine: (functional-471370) Calling .Close
I0927 00:32:58.451626   32787 main.go:141] libmachine: (functional-471370) DBG | Closing plugin on server side
I0927 00:32:58.451663   32787 main.go:141] libmachine: Successfully made call to close driver server
I0927 00:32:58.451689   32787 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.21s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.21s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-471370 image ls --format table --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-471370 image ls --format table --alsologtostderr:
|---------------------------------------------|-------------------|---------------|--------|
|                    Image                    |        Tag        |   Image ID    |  Size  |
|---------------------------------------------|-------------------|---------------|--------|
| docker.io/library/minikube-local-cache-test | functional-471370 | 9419e99f489a9 | 30B    |
| registry.k8s.io/kube-apiserver              | v1.31.1           | 6bab7719df100 | 94.2MB |
| registry.k8s.io/kube-controller-manager     | v1.31.1           | 175ffd71cce3d | 88.4MB |
| registry.k8s.io/coredns/coredns             | v1.11.3           | c69fa2e9cbf5f | 61.8MB |
| registry.k8s.io/etcd                        | 3.5.15-0          | 2e96e5913fc06 | 148MB  |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc      | 56cc512116c8f | 4.4MB  |
| registry.k8s.io/kube-proxy                  | v1.31.1           | 60c005f310ff3 | 91.5MB |
| localhost/my-image                          | functional-471370 | e5e5a9e2095e7 | 1.24MB |
| registry.k8s.io/kube-scheduler              | v1.31.1           | 9aa1fad941575 | 67.4MB |
| docker.io/library/nginx                     | latest            | 39286ab8a5e14 | 188MB  |
| registry.k8s.io/pause                       | 3.1               | da86e6ba6ca19 | 742kB  |
| registry.k8s.io/echoserver                  | 1.8               | 82e4c8a736a4f | 95.4MB |
| registry.k8s.io/pause                       | latest            | 350b164e7ae1d | 240kB  |
| registry.k8s.io/pause                       | 3.10              | 873ed75102791 | 736kB  |
| docker.io/kicbase/echo-server               | functional-471370 | 9056ab77afb8e | 4.94MB |
| gcr.io/k8s-minikube/storage-provisioner     | v5                | 6e38f40d628db | 31.5MB |
| registry.k8s.io/pause                       | 3.3               | 0184c1613d929 | 683kB  |
|---------------------------------------------|-------------------|---------------|--------|
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-471370 image ls --format table --alsologtostderr:
I0927 00:33:03.008243   32950 out.go:345] Setting OutFile to fd 1 ...
I0927 00:33:03.008511   32950 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0927 00:33:03.008522   32950 out.go:358] Setting ErrFile to fd 2...
I0927 00:33:03.008527   32950 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0927 00:33:03.008693   32950 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19711-14912/.minikube/bin
I0927 00:33:03.009253   32950 config.go:182] Loaded profile config "functional-471370": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0927 00:33:03.009347   32950 config.go:182] Loaded profile config "functional-471370": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0927 00:33:03.009689   32950 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0927 00:33:03.009733   32950 main.go:141] libmachine: Launching plugin server for driver kvm2
I0927 00:33:03.025069   32950 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43089
I0927 00:33:03.025576   32950 main.go:141] libmachine: () Calling .GetVersion
I0927 00:33:03.026262   32950 main.go:141] libmachine: Using API Version  1
I0927 00:33:03.026290   32950 main.go:141] libmachine: () Calling .SetConfigRaw
I0927 00:33:03.026679   32950 main.go:141] libmachine: () Calling .GetMachineName
I0927 00:33:03.026948   32950 main.go:141] libmachine: (functional-471370) Calling .GetState
I0927 00:33:03.029079   32950 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0927 00:33:03.029135   32950 main.go:141] libmachine: Launching plugin server for driver kvm2
I0927 00:33:03.045641   32950 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33451
I0927 00:33:03.046165   32950 main.go:141] libmachine: () Calling .GetVersion
I0927 00:33:03.046686   32950 main.go:141] libmachine: Using API Version  1
I0927 00:33:03.046707   32950 main.go:141] libmachine: () Calling .SetConfigRaw
I0927 00:33:03.047100   32950 main.go:141] libmachine: () Calling .GetMachineName
I0927 00:33:03.047388   32950 main.go:141] libmachine: (functional-471370) Calling .DriverName
I0927 00:33:03.047622   32950 ssh_runner.go:195] Run: systemctl --version
I0927 00:33:03.047652   32950 main.go:141] libmachine: (functional-471370) Calling .GetSSHHostname
I0927 00:33:03.051421   32950 main.go:141] libmachine: (functional-471370) DBG | domain functional-471370 has defined MAC address 52:54:00:a1:b9:36 in network mk-functional-471370
I0927 00:33:03.051921   32950 main.go:141] libmachine: (functional-471370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:b9:36", ip: ""} in network mk-functional-471370: {Iface:virbr1 ExpiryTime:2024-09-27 01:30:16 +0000 UTC Type:0 Mac:52:54:00:a1:b9:36 Iaid: IPaddr:192.168.39.126 Prefix:24 Hostname:functional-471370 Clientid:01:52:54:00:a1:b9:36}
I0927 00:33:03.051952   32950 main.go:141] libmachine: (functional-471370) DBG | domain functional-471370 has defined IP address 192.168.39.126 and MAC address 52:54:00:a1:b9:36 in network mk-functional-471370
I0927 00:33:03.052312   32950 main.go:141] libmachine: (functional-471370) Calling .GetSSHPort
I0927 00:33:03.052493   32950 main.go:141] libmachine: (functional-471370) Calling .GetSSHKeyPath
I0927 00:33:03.052666   32950 main.go:141] libmachine: (functional-471370) Calling .GetSSHUsername
I0927 00:33:03.052802   32950 sshutil.go:53] new ssh client: &{IP:192.168.39.126 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19711-14912/.minikube/machines/functional-471370/id_rsa Username:docker}
I0927 00:33:03.134235   32950 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
I0927 00:33:03.168505   32950 main.go:141] libmachine: Making call to close driver server
I0927 00:33:03.168527   32950 main.go:141] libmachine: (functional-471370) Calling .Close
I0927 00:33:03.168879   32950 main.go:141] libmachine: (functional-471370) DBG | Closing plugin on server side
I0927 00:33:03.168928   32950 main.go:141] libmachine: Successfully made call to close driver server
I0927 00:33:03.168936   32950 main.go:141] libmachine: Making call to close connection to plugin binary
I0927 00:33:03.168949   32950 main.go:141] libmachine: Making call to close driver server
I0927 00:33:03.168963   32950 main.go:141] libmachine: (functional-471370) Calling .Close
I0927 00:33:03.169227   32950 main.go:141] libmachine: (functional-471370) DBG | Closing plugin on server side
I0927 00:33:03.169259   32950 main.go:141] libmachine: Successfully made call to close driver server
I0927 00:33:03.169272   32950 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.21s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.21s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-471370 image ls --format json --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-471370 image ls --format json --alsologtostderr:
[{"id":"39286ab8a5e14aeaf5fdd6e2fac76e0c8d31a0c07224f0ee5e6be502f12e93f3","repoDigests":[],"repoTags":["docker.io/library/nginx:latest"],"size":"188000000"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4400000"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31500000"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"742000"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":[],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"95400000"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"240000"},{"id":"2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4","
repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.5.15-0"],"size":"148000000"},{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":[],"repoTags":["docker.io/kicbase/echo-server:functional-471370"],"size":"4940000"},{"id":"e5e5a9e2095e70cdf8a6a77d7ef4c8e4ce062febd8c8b69aeb343f28bac96684","repoDigests":[],"repoTags":["localhost/my-image:functional-471370"],"size":"1240000"},{"id":"175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.31.1"],"size":"88400000"},{"id":"9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.31.1"],"size":"67400000"},{"id":"60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.31.1"],"size":"91500000"},{"id":"c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6","repoDigests":[],"repoTags":["registry.k8s.io
/coredns/coredns:v1.11.3"],"size":"61800000"},{"id":"9419e99f489a9b777dc8ccb9f9711e5f0d2900d73539495535ef205749907beb","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-471370"],"size":"30"},{"id":"6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.31.1"],"size":"94200000"},{"id":"873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.10"],"size":"736000"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"683000"}]
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-471370 image ls --format json --alsologtostderr:
I0927 00:33:02.800077   32911 out.go:345] Setting OutFile to fd 1 ...
I0927 00:33:02.800198   32911 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0927 00:33:02.800210   32911 out.go:358] Setting ErrFile to fd 2...
I0927 00:33:02.800216   32911 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0927 00:33:02.800501   32911 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19711-14912/.minikube/bin
I0927 00:33:02.801252   32911 config.go:182] Loaded profile config "functional-471370": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0927 00:33:02.801363   32911 config.go:182] Loaded profile config "functional-471370": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0927 00:33:02.801783   32911 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0927 00:33:02.801828   32911 main.go:141] libmachine: Launching plugin server for driver kvm2
I0927 00:33:02.817651   32911 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34171
I0927 00:33:02.818258   32911 main.go:141] libmachine: () Calling .GetVersion
I0927 00:33:02.818945   32911 main.go:141] libmachine: Using API Version  1
I0927 00:33:02.818981   32911 main.go:141] libmachine: () Calling .SetConfigRaw
I0927 00:33:02.819434   32911 main.go:141] libmachine: () Calling .GetMachineName
I0927 00:33:02.819665   32911 main.go:141] libmachine: (functional-471370) Calling .GetState
I0927 00:33:02.822409   32911 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0927 00:33:02.822468   32911 main.go:141] libmachine: Launching plugin server for driver kvm2
I0927 00:33:02.838348   32911 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43175
I0927 00:33:02.838917   32911 main.go:141] libmachine: () Calling .GetVersion
I0927 00:33:02.839554   32911 main.go:141] libmachine: Using API Version  1
I0927 00:33:02.839581   32911 main.go:141] libmachine: () Calling .SetConfigRaw
I0927 00:33:02.839989   32911 main.go:141] libmachine: () Calling .GetMachineName
I0927 00:33:02.840181   32911 main.go:141] libmachine: (functional-471370) Calling .DriverName
I0927 00:33:02.840426   32911 ssh_runner.go:195] Run: systemctl --version
I0927 00:33:02.840463   32911 main.go:141] libmachine: (functional-471370) Calling .GetSSHHostname
I0927 00:33:02.843666   32911 main.go:141] libmachine: (functional-471370) DBG | domain functional-471370 has defined MAC address 52:54:00:a1:b9:36 in network mk-functional-471370
I0927 00:33:02.844188   32911 main.go:141] libmachine: (functional-471370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:b9:36", ip: ""} in network mk-functional-471370: {Iface:virbr1 ExpiryTime:2024-09-27 01:30:16 +0000 UTC Type:0 Mac:52:54:00:a1:b9:36 Iaid: IPaddr:192.168.39.126 Prefix:24 Hostname:functional-471370 Clientid:01:52:54:00:a1:b9:36}
I0927 00:33:02.844236   32911 main.go:141] libmachine: (functional-471370) DBG | domain functional-471370 has defined IP address 192.168.39.126 and MAC address 52:54:00:a1:b9:36 in network mk-functional-471370
I0927 00:33:02.844370   32911 main.go:141] libmachine: (functional-471370) Calling .GetSSHPort
I0927 00:33:02.844549   32911 main.go:141] libmachine: (functional-471370) Calling .GetSSHKeyPath
I0927 00:33:02.844663   32911 main.go:141] libmachine: (functional-471370) Calling .GetSSHUsername
I0927 00:33:02.844750   32911 sshutil.go:53] new ssh client: &{IP:192.168.39.126 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19711-14912/.minikube/machines/functional-471370/id_rsa Username:docker}
I0927 00:33:02.926608   32911 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
I0927 00:33:02.955397   32911 main.go:141] libmachine: Making call to close driver server
I0927 00:33:02.955408   32911 main.go:141] libmachine: (functional-471370) Calling .Close
I0927 00:33:02.955737   32911 main.go:141] libmachine: Successfully made call to close driver server
I0927 00:33:02.955761   32911 main.go:141] libmachine: (functional-471370) DBG | Closing plugin on server side
I0927 00:33:02.955764   32911 main.go:141] libmachine: Making call to close connection to plugin binary
I0927 00:33:02.955784   32911 main.go:141] libmachine: Making call to close driver server
I0927 00:33:02.955793   32911 main.go:141] libmachine: (functional-471370) Calling .Close
I0927 00:33:02.955991   32911 main.go:141] libmachine: Successfully made call to close driver server
I0927 00:33:02.956006   32911 main.go:141] libmachine: (functional-471370) DBG | Closing plugin on server side
I0927 00:33:02.956018   32911 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.21s)

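For reference, the output of `image ls --format json` shown above is a flat JSON array of image records (`id`, `repoDigests`, `repoTags`, `size`), with `size` encoded as a string of bytes. A minimal sketch of consuming that format (the sample entry is copied from the output above; Python is used here only for brevity):

```python
import json

# One record copied verbatim from the `image ls --format json` output above.
sample = ('[{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30",'
          '"repoDigests":[],"repoTags":["docker.io/kicbase/echo-server:functional-471370"],'
          '"size":"4940000"}]')

images = json.loads(sample)
for img in images:
    # "size" is a decimal string of bytes; convert before doing arithmetic.
    size_mb = int(img["size"]) / 1e6
    print(f'{img["repoTags"][0]}: {size_mb:.1f} MB')
```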
TestFunctional/parallel/ImageCommands/ImageListYaml (0.19s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-471370 image ls --format yaml --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-471370 image ls --format yaml --alsologtostderr:
- id: 873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.10
size: "736000"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "240000"
- id: 175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.31.1
size: "88400000"
- id: 60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.31.1
size: "91500000"
- id: 2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.5.15-0
size: "148000000"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "683000"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4400000"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests: []
repoTags:
- registry.k8s.io/echoserver:1.8
size: "95400000"
- id: 9419e99f489a9b777dc8ccb9f9711e5f0d2900d73539495535ef205749907beb
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-471370
size: "30"
- id: 6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.31.1
size: "94200000"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31500000"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "742000"
- id: 9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.31.1
size: "67400000"
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests: []
repoTags:
- docker.io/kicbase/echo-server:functional-471370
size: "4940000"
- id: 39286ab8a5e14aeaf5fdd6e2fac76e0c8d31a0c07224f0ee5e6be502f12e93f3
repoDigests: []
repoTags:
- docker.io/library/nginx:latest
size: "188000000"
- id: c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.3
size: "61800000"

                                                
                                                
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-471370 image ls --format yaml --alsologtostderr:
I0927 00:32:58.499823   32811 out.go:345] Setting OutFile to fd 1 ...
I0927 00:32:58.499944   32811 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0927 00:32:58.499951   32811 out.go:358] Setting ErrFile to fd 2...
I0927 00:32:58.499959   32811 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0927 00:32:58.500176   32811 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19711-14912/.minikube/bin
I0927 00:32:58.500906   32811 config.go:182] Loaded profile config "functional-471370": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0927 00:32:58.501036   32811 config.go:182] Loaded profile config "functional-471370": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0927 00:32:58.501418   32811 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0927 00:32:58.501464   32811 main.go:141] libmachine: Launching plugin server for driver kvm2
I0927 00:32:58.517217   32811 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43223
I0927 00:32:58.517754   32811 main.go:141] libmachine: () Calling .GetVersion
I0927 00:32:58.518384   32811 main.go:141] libmachine: Using API Version  1
I0927 00:32:58.518407   32811 main.go:141] libmachine: () Calling .SetConfigRaw
I0927 00:32:58.518767   32811 main.go:141] libmachine: () Calling .GetMachineName
I0927 00:32:58.518965   32811 main.go:141] libmachine: (functional-471370) Calling .GetState
I0927 00:32:58.521265   32811 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0927 00:32:58.521321   32811 main.go:141] libmachine: Launching plugin server for driver kvm2
I0927 00:32:58.536579   32811 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36137
I0927 00:32:58.537092   32811 main.go:141] libmachine: () Calling .GetVersion
I0927 00:32:58.537595   32811 main.go:141] libmachine: Using API Version  1
I0927 00:32:58.537618   32811 main.go:141] libmachine: () Calling .SetConfigRaw
I0927 00:32:58.538033   32811 main.go:141] libmachine: () Calling .GetMachineName
I0927 00:32:58.538245   32811 main.go:141] libmachine: (functional-471370) Calling .DriverName
I0927 00:32:58.538468   32811 ssh_runner.go:195] Run: systemctl --version
I0927 00:32:58.538501   32811 main.go:141] libmachine: (functional-471370) Calling .GetSSHHostname
I0927 00:32:58.541543   32811 main.go:141] libmachine: (functional-471370) DBG | domain functional-471370 has defined MAC address 52:54:00:a1:b9:36 in network mk-functional-471370
I0927 00:32:58.542053   32811 main.go:141] libmachine: (functional-471370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:b9:36", ip: ""} in network mk-functional-471370: {Iface:virbr1 ExpiryTime:2024-09-27 01:30:16 +0000 UTC Type:0 Mac:52:54:00:a1:b9:36 Iaid: IPaddr:192.168.39.126 Prefix:24 Hostname:functional-471370 Clientid:01:52:54:00:a1:b9:36}
I0927 00:32:58.542087   32811 main.go:141] libmachine: (functional-471370) DBG | domain functional-471370 has defined IP address 192.168.39.126 and MAC address 52:54:00:a1:b9:36 in network mk-functional-471370
I0927 00:32:58.542267   32811 main.go:141] libmachine: (functional-471370) Calling .GetSSHPort
I0927 00:32:58.542478   32811 main.go:141] libmachine: (functional-471370) Calling .GetSSHKeyPath
I0927 00:32:58.542632   32811 main.go:141] libmachine: (functional-471370) Calling .GetSSHUsername
I0927 00:32:58.542777   32811 sshutil.go:53] new ssh client: &{IP:192.168.39.126 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19711-14912/.minikube/machines/functional-471370/id_rsa Username:docker}
I0927 00:32:58.621344   32811 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
I0927 00:32:58.643070   32811 main.go:141] libmachine: Making call to close driver server
I0927 00:32:58.643085   32811 main.go:141] libmachine: (functional-471370) Calling .Close
I0927 00:32:58.643454   32811 main.go:141] libmachine: (functional-471370) DBG | Closing plugin on server side
I0927 00:32:58.643470   32811 main.go:141] libmachine: Successfully made call to close driver server
I0927 00:32:58.643485   32811 main.go:141] libmachine: Making call to close connection to plugin binary
I0927 00:32:58.643499   32811 main.go:141] libmachine: Making call to close driver server
I0927 00:32:58.643507   32811 main.go:141] libmachine: (functional-471370) Calling .Close
I0927 00:32:58.643714   32811 main.go:141] libmachine: Successfully made call to close driver server
I0927 00:32:58.643728   32811 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.19s)

TestFunctional/parallel/ImageCommands/ImageBuild (4.11s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:308: (dbg) Run:  out/minikube-linux-amd64 -p functional-471370 ssh pgrep buildkitd
functional_test.go:308: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-471370 ssh pgrep buildkitd: exit status 1 (196.009524ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:315: (dbg) Run:  out/minikube-linux-amd64 -p functional-471370 image build -t localhost/my-image:functional-471370 testdata/build --alsologtostderr
functional_test.go:315: (dbg) Done: out/minikube-linux-amd64 -p functional-471370 image build -t localhost/my-image:functional-471370 testdata/build --alsologtostderr: (3.693038212s)
functional_test.go:323: (dbg) Stderr: out/minikube-linux-amd64 -p functional-471370 image build -t localhost/my-image:functional-471370 testdata/build --alsologtostderr:
I0927 00:32:58.891537   32863 out.go:345] Setting OutFile to fd 1 ...
I0927 00:32:58.891684   32863 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0927 00:32:58.891692   32863 out.go:358] Setting ErrFile to fd 2...
I0927 00:32:58.891696   32863 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0927 00:32:58.891923   32863 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19711-14912/.minikube/bin
I0927 00:32:58.892644   32863 config.go:182] Loaded profile config "functional-471370": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0927 00:32:58.893219   32863 config.go:182] Loaded profile config "functional-471370": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0927 00:32:58.893663   32863 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0927 00:32:58.893728   32863 main.go:141] libmachine: Launching plugin server for driver kvm2
I0927 00:32:58.909178   32863 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42231
I0927 00:32:58.909717   32863 main.go:141] libmachine: () Calling .GetVersion
I0927 00:32:58.910245   32863 main.go:141] libmachine: Using API Version  1
I0927 00:32:58.910267   32863 main.go:141] libmachine: () Calling .SetConfigRaw
I0927 00:32:58.910608   32863 main.go:141] libmachine: () Calling .GetMachineName
I0927 00:32:58.910866   32863 main.go:141] libmachine: (functional-471370) Calling .GetState
I0927 00:32:58.912898   32863 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0927 00:32:58.912945   32863 main.go:141] libmachine: Launching plugin server for driver kvm2
I0927 00:32:58.928576   32863 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35653
I0927 00:32:58.929130   32863 main.go:141] libmachine: () Calling .GetVersion
I0927 00:32:58.929662   32863 main.go:141] libmachine: Using API Version  1
I0927 00:32:58.929682   32863 main.go:141] libmachine: () Calling .SetConfigRaw
I0927 00:32:58.930038   32863 main.go:141] libmachine: () Calling .GetMachineName
I0927 00:32:58.930230   32863 main.go:141] libmachine: (functional-471370) Calling .DriverName
I0927 00:32:58.930491   32863 ssh_runner.go:195] Run: systemctl --version
I0927 00:32:58.930519   32863 main.go:141] libmachine: (functional-471370) Calling .GetSSHHostname
I0927 00:32:58.933076   32863 main.go:141] libmachine: (functional-471370) DBG | domain functional-471370 has defined MAC address 52:54:00:a1:b9:36 in network mk-functional-471370
I0927 00:32:58.933548   32863 main.go:141] libmachine: (functional-471370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:b9:36", ip: ""} in network mk-functional-471370: {Iface:virbr1 ExpiryTime:2024-09-27 01:30:16 +0000 UTC Type:0 Mac:52:54:00:a1:b9:36 Iaid: IPaddr:192.168.39.126 Prefix:24 Hostname:functional-471370 Clientid:01:52:54:00:a1:b9:36}
I0927 00:32:58.933589   32863 main.go:141] libmachine: (functional-471370) DBG | domain functional-471370 has defined IP address 192.168.39.126 and MAC address 52:54:00:a1:b9:36 in network mk-functional-471370
I0927 00:32:58.933699   32863 main.go:141] libmachine: (functional-471370) Calling .GetSSHPort
I0927 00:32:58.933897   32863 main.go:141] libmachine: (functional-471370) Calling .GetSSHKeyPath
I0927 00:32:58.934097   32863 main.go:141] libmachine: (functional-471370) Calling .GetSSHUsername
I0927 00:32:58.934255   32863 sshutil.go:53] new ssh client: &{IP:192.168.39.126 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19711-14912/.minikube/machines/functional-471370/id_rsa Username:docker}
I0927 00:32:59.010026   32863 build_images.go:161] Building image from path: /tmp/build.1231978054.tar
I0927 00:32:59.010100   32863 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0927 00:32:59.021552   32863 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.1231978054.tar
I0927 00:32:59.026094   32863 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.1231978054.tar: stat -c "%s %y" /var/lib/minikube/build/build.1231978054.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.1231978054.tar': No such file or directory
I0927 00:32:59.026138   32863 ssh_runner.go:362] scp /tmp/build.1231978054.tar --> /var/lib/minikube/build/build.1231978054.tar (3072 bytes)
I0927 00:32:59.061429   32863 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.1231978054
I0927 00:32:59.072106   32863 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.1231978054 -xf /var/lib/minikube/build/build.1231978054.tar
I0927 00:32:59.082474   32863 docker.go:360] Building image: /var/lib/minikube/build/build.1231978054
I0927 00:32:59.082557   32863 ssh_runner.go:195] Run: docker build -t localhost/my-image:functional-471370 /var/lib/minikube/build/build.1231978054
#0 building with "default" instance using docker driver

#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.1s

#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 1.0s

#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s

#4 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#4 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.1s done
#4 sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee 527B / 527B done
#4 sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a 1.46kB / 1.46kB done
#4 sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 770B / 770B done
#4 ...

#5 [internal] load build context
#5 transferring context: 62B 0.1s done
#5 DONE 0.3s

#4 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#4 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0B / 772.79kB 0.3s
#4 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 772.79kB / 772.79kB 0.5s done
#4 extracting sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
#4 extracting sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0.1s done
#4 DONE 0.7s

#6 [2/3] RUN true
#6 DONE 0.8s

#7 [3/3] ADD content.txt /
#7 DONE 0.2s

#8 exporting to image
#8 exporting layers 0.1s done
#8 writing image sha256:e5e5a9e2095e70cdf8a6a77d7ef4c8e4ce062febd8c8b69aeb343f28bac96684
#8 writing image sha256:e5e5a9e2095e70cdf8a6a77d7ef4c8e4ce062febd8c8b69aeb343f28bac96684 done
#8 naming to localhost/my-image:functional-471370 0.0s done
#8 DONE 0.2s
I0927 00:33:02.500225   32863 ssh_runner.go:235] Completed: docker build -t localhost/my-image:functional-471370 /var/lib/minikube/build/build.1231978054: (3.417635412s)
I0927 00:33:02.500324   32863 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.1231978054
I0927 00:33:02.515651   32863 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.1231978054.tar
I0927 00:33:02.530320   32863 build_images.go:217] Built localhost/my-image:functional-471370 from /tmp/build.1231978054.tar
I0927 00:33:02.530356   32863 build_images.go:133] succeeded building to: functional-471370
I0927 00:33:02.530361   32863 build_images.go:134] failed building to: 
I0927 00:33:02.530382   32863 main.go:141] libmachine: Making call to close driver server
I0927 00:33:02.530390   32863 main.go:141] libmachine: (functional-471370) Calling .Close
I0927 00:33:02.530700   32863 main.go:141] libmachine: Successfully made call to close driver server
I0927 00:33:02.530716   32863 main.go:141] libmachine: Making call to close connection to plugin binary
I0927 00:33:02.530724   32863 main.go:141] libmachine: Making call to close driver server
I0927 00:33:02.530731   32863 main.go:141] libmachine: (functional-471370) Calling .Close
I0927 00:33:02.531031   32863 main.go:141] libmachine: Successfully made call to close driver server
I0927 00:33:02.531073   32863 main.go:141] libmachine: Making call to close connection to plugin binary
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-471370 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (4.11s)

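For reference, the three BuildKit steps logged in the build above (#4 `FROM gcr.io/k8s-minikube/busybox:latest`, #6 `RUN true`, #7 `ADD content.txt /`) correspond to a Dockerfile of roughly this shape; this is a reconstruction from the log, not the actual `testdata/build` file:

```dockerfile
# Reconstructed from the BuildKit steps logged above; the real
# testdata/build Dockerfile may differ in details.
FROM gcr.io/k8s-minikube/busybox:latest
RUN true
ADD content.txt /
```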
TestFunctional/parallel/ImageCommands/Setup (1.57s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:342: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:342: (dbg) Done: docker pull kicbase/echo-server:1.0: (1.533001887s)
functional_test.go:347: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-471370
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.57s)

TestFunctional/parallel/DockerEnv/bash (0.91s)

=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:499: (dbg) Run:  /bin/bash -c "eval $(out/minikube-linux-amd64 -p functional-471370 docker-env) && out/minikube-linux-amd64 status -p functional-471370"
functional_test.go:522: (dbg) Run:  /bin/bash -c "eval $(out/minikube-linux-amd64 -p functional-471370 docker-env) && docker images"
--- PASS: TestFunctional/parallel/DockerEnv/bash (0.91s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.26s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:355: (dbg) Run:  out/minikube-linux-amd64 -p functional-471370 image load --daemon kicbase/echo-server:functional-471370 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-471370 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.26s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.1s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-471370 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.10s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.1s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-471370 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.10s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.1s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-471370 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.10s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_not_create (0.39s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1270: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1275: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.39s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.83s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p functional-471370 image load --daemon kicbase/echo-server:functional-471370 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-471370 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.83s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_list (0.39s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1310: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1315: Took "335.0473ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1324: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1329: Took "52.919064ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.39s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_json_output (0.33s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1361: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1366: Took "281.475512ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1374: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1379: Took "50.003616ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.33s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.5s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:235: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:240: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-471370
functional_test.go:245: (dbg) Run:  out/minikube-linux-amd64 -p functional-471370 image load --daemon kicbase/echo-server:functional-471370 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-471370 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.50s)

                                                
                                    
TestFunctional/parallel/MountCmd/any-port (7.85s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-471370 /tmp/TestFunctionalparallelMountCmdany-port264467774/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1727397164561544268" to /tmp/TestFunctionalparallelMountCmdany-port264467774/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1727397164561544268" to /tmp/TestFunctionalparallelMountCmdany-port264467774/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1727397164561544268" to /tmp/TestFunctionalparallelMountCmdany-port264467774/001/test-1727397164561544268
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-471370 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-471370 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (214.910648ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I0927 00:32:44.776766   22114 retry.go:31] will retry after 692.913966ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-471370 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-471370 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Sep 27 00:32 created-by-test
-rw-r--r-- 1 docker docker 24 Sep 27 00:32 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Sep 27 00:32 test-1727397164561544268
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-471370 ssh cat /mount-9p/test-1727397164561544268
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-471370 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [a7f04c90-b9e0-4a17-83f9-44abf39561ed] Pending
helpers_test.go:344: "busybox-mount" [a7f04c90-b9e0-4a17-83f9-44abf39561ed] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [a7f04c90-b9e0-4a17-83f9-44abf39561ed] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [a7f04c90-b9e0-4a17-83f9-44abf39561ed] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 5.003484427s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-471370 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-471370 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-471370 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-471370 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-471370 /tmp/TestFunctionalparallelMountCmdany-port264467774/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (7.85s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.35s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-471370 image save kicbase/echo-server:functional-471370 /home/jenkins/workspace/KVM_Linux_integration/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.35s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageRemove (0.41s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:392: (dbg) Run:  out/minikube-linux-amd64 -p functional-471370 image rm kicbase/echo-server:functional-471370 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-471370 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.41s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.74s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:409: (dbg) Run:  out/minikube-linux-amd64 -p functional-471370 image load /home/jenkins/workspace/KVM_Linux_integration/echo-server-save.tar --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-471370 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.74s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.41s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:419: (dbg) Run:  docker rmi kicbase/echo-server:functional-471370
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-471370 image save --daemon kicbase/echo-server:functional-471370 --alsologtostderr
functional_test.go:432: (dbg) Run:  docker image inspect kicbase/echo-server:functional-471370
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.41s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/List (0.33s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1459: (dbg) Run:  out/minikube-linux-amd64 -p functional-471370 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.33s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/JSONOutput (0.27s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1489: (dbg) Run:  out/minikube-linux-amd64 -p functional-471370 service list -o json
functional_test.go:1494: Took "274.430305ms" to run "out/minikube-linux-amd64 -p functional-471370 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.27s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/HTTPS (0.33s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1509: (dbg) Run:  out/minikube-linux-amd64 -p functional-471370 service --namespace=default --https --url hello-node
functional_test.go:1522: found endpoint: https://192.168.39.126:31768
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.33s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/Format (0.34s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1540: (dbg) Run:  out/minikube-linux-amd64 -p functional-471370 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.34s)

                                                
                                    
TestFunctional/parallel/MountCmd/specific-port (1.8s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-471370 /tmp/TestFunctionalparallelMountCmdspecific-port1657675718/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-471370 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-471370 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (229.808682ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I0927 00:32:52.641899   22114 retry.go:31] will retry after 472.85771ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-471370 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-471370 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-471370 /tmp/TestFunctionalparallelMountCmdspecific-port1657675718/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-471370 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-471370 ssh "sudo umount -f /mount-9p": exit status 1 (259.097411ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-471370 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-471370 /tmp/TestFunctionalparallelMountCmdspecific-port1657675718/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.80s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/URL (0.31s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1559: (dbg) Run:  out/minikube-linux-amd64 -p functional-471370 service hello-node --url
functional_test.go:1565: found endpoint for hello-node: http://192.168.39.126:31768
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.31s)

                                                
                                    
TestFunctional/parallel/MountCmd/VerifyCleanup (1.59s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-471370 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2952776466/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-471370 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2952776466/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-471370 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2952776466/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-471370 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-471370 ssh "findmnt -T" /mount1: exit status 1 (460.81719ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I0927 00:32:54.673088   22114 retry.go:31] will retry after 412.958572ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-471370 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-471370 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-471370 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-471370 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-471370 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2952776466/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-471370 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2952776466/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-471370 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2952776466/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.59s)

                                                
                                    
TestFunctional/delete_echo-server_images (0.04s)

                                                
                                                
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-471370
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

                                                
                                    
TestFunctional/delete_my-image_image (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:198: (dbg) Run:  docker rmi -f localhost/my-image:functional-471370
--- PASS: TestFunctional/delete_my-image_image (0.02s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:206: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-471370
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                    
TestGvisorAddon (235.76s)

                                                
                                                
=== RUN   TestGvisorAddon
=== PAUSE TestGvisorAddon
=== CONT  TestGvisorAddon
gvisor_addon_test.go:52: (dbg) Run:  out/minikube-linux-amd64 start -p gvisor-151964 --memory=2200 --container-runtime=containerd --docker-opt containerd=/var/run/containerd/containerd.sock --driver=kvm2 
E0927 01:20:25.148137   22114 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-14912/.minikube/profiles/skaffold-410166/client.crt: no such file or directory" logger="UnhandledError"
gvisor_addon_test.go:52: (dbg) Done: out/minikube-linux-amd64 start -p gvisor-151964 --memory=2200 --container-runtime=containerd --docker-opt containerd=/var/run/containerd/containerd.sock --driver=kvm2 : (1m12.216257823s)
gvisor_addon_test.go:58: (dbg) Run:  out/minikube-linux-amd64 -p gvisor-151964 cache add gcr.io/k8s-minikube/gvisor-addon:2
gvisor_addon_test.go:58: (dbg) Done: out/minikube-linux-amd64 -p gvisor-151964 cache add gcr.io/k8s-minikube/gvisor-addon:2: (21.928829679s)
gvisor_addon_test.go:63: (dbg) Run:  out/minikube-linux-amd64 -p gvisor-151964 addons enable gvisor
gvisor_addon_test.go:63: (dbg) Done: out/minikube-linux-amd64 -p gvisor-151964 addons enable gvisor: (4.197081732s)
gvisor_addon_test.go:68: (dbg) TestGvisorAddon: waiting 4m0s for pods matching "kubernetes.io/minikube-addons=gvisor" in namespace "kube-system" ...
helpers_test.go:344: "gvisor" [03edb57f-65a6-40b8-b8de-a3f653f3b164] Running
gvisor_addon_test.go:68: (dbg) TestGvisorAddon: kubernetes.io/minikube-addons=gvisor healthy within 6.004316568s
gvisor_addon_test.go:73: (dbg) Run:  kubectl --context gvisor-151964 replace --force -f testdata/nginx-gvisor.yaml
gvisor_addon_test.go:78: (dbg) TestGvisorAddon: waiting 4m0s for pods matching "run=nginx,runtime=gvisor" in namespace "default" ...
helpers_test.go:344: "nginx-gvisor" [903e1634-083d-4d62-9acd-786c855bd627] Pending
helpers_test.go:344: "nginx-gvisor" [903e1634-083d-4d62-9acd-786c855bd627] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-gvisor" [903e1634-083d-4d62-9acd-786c855bd627] Running
gvisor_addon_test.go:78: (dbg) TestGvisorAddon: run=nginx,runtime=gvisor healthy within 28.006176362s
gvisor_addon_test.go:83: (dbg) Run:  out/minikube-linux-amd64 stop -p gvisor-151964
gvisor_addon_test.go:83: (dbg) Done: out/minikube-linux-amd64 stop -p gvisor-151964: (7.334799198s)
gvisor_addon_test.go:88: (dbg) Run:  out/minikube-linux-amd64 start -p gvisor-151964 --memory=2200 --container-runtime=containerd --docker-opt containerd=/var/run/containerd/containerd.sock --driver=kvm2 
E0927 01:22:40.382405   22114 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-14912/.minikube/profiles/functional-471370/client.crt: no such file or directory" logger="UnhandledError"
gvisor_addon_test.go:88: (dbg) Done: out/minikube-linux-amd64 start -p gvisor-151964 --memory=2200 --container-runtime=containerd --docker-opt containerd=/var/run/containerd/containerd.sock --driver=kvm2 : (1m22.761361046s)
gvisor_addon_test.go:92: (dbg) TestGvisorAddon: waiting 4m0s for pods matching "kubernetes.io/minikube-addons=gvisor" in namespace "kube-system" ...
helpers_test.go:344: "gvisor" [03edb57f-65a6-40b8-b8de-a3f653f3b164] Running
E0927 01:24:03.208343   22114 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-14912/.minikube/profiles/skaffold-410166/client.crt: no such file or directory" logger="UnhandledError"
gvisor_addon_test.go:92: (dbg) TestGvisorAddon: kubernetes.io/minikube-addons=gvisor healthy within 6.005348901s
gvisor_addon_test.go:95: (dbg) TestGvisorAddon: waiting 4m0s for pods matching "run=nginx,runtime=gvisor" in namespace "default" ...
helpers_test.go:344: "nginx-gvisor" [903e1634-083d-4d62-9acd-786c855bd627] Running
helpers_test.go:344: "nginx-gvisor" [903e1634-083d-4d62-9acd-786c855bd627] Running / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
gvisor_addon_test.go:95: (dbg) TestGvisorAddon: run=nginx,runtime=gvisor healthy within 6.004979262s
helpers_test.go:175: Cleaning up "gvisor-151964" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p gvisor-151964
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p gvisor-151964: (1.103662069s)
--- PASS: TestGvisorAddon (235.76s)

                                                
                                    
TestMultiControlPlane/serial/StartCluster (222.72s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p ha-859802 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2 
E0927 00:33:34.892644   22114 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-14912/.minikube/profiles/addons-921129/client.crt: no such file or directory" logger="UnhandledError"
E0927 00:33:34.899192   22114 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-14912/.minikube/profiles/addons-921129/client.crt: no such file or directory" logger="UnhandledError"
E0927 00:33:34.910781   22114 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-14912/.minikube/profiles/addons-921129/client.crt: no such file or directory" logger="UnhandledError"
E0927 00:33:34.932212   22114 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-14912/.minikube/profiles/addons-921129/client.crt: no such file or directory" logger="UnhandledError"
E0927 00:33:34.973651   22114 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-14912/.minikube/profiles/addons-921129/client.crt: no such file or directory" logger="UnhandledError"
E0927 00:33:35.055182   22114 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-14912/.minikube/profiles/addons-921129/client.crt: no such file or directory" logger="UnhandledError"
E0927 00:33:35.216743   22114 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-14912/.minikube/profiles/addons-921129/client.crt: no such file or directory" logger="UnhandledError"
E0927 00:33:35.539003   22114 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-14912/.minikube/profiles/addons-921129/client.crt: no such file or directory" logger="UnhandledError"
E0927 00:33:36.180436   22114 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-14912/.minikube/profiles/addons-921129/client.crt: no such file or directory" logger="UnhandledError"
E0927 00:33:37.462442   22114 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-14912/.minikube/profiles/addons-921129/client.crt: no such file or directory" logger="UnhandledError"
E0927 00:33:40.023916   22114 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-14912/.minikube/profiles/addons-921129/client.crt: no such file or directory" logger="UnhandledError"
E0927 00:33:45.145809   22114 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-14912/.minikube/profiles/addons-921129/client.crt: no such file or directory" logger="UnhandledError"
E0927 00:33:55.387421   22114 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-14912/.minikube/profiles/addons-921129/client.crt: no such file or directory" logger="UnhandledError"
E0927 00:34:15.868943   22114 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-14912/.minikube/profiles/addons-921129/client.crt: no such file or directory" logger="UnhandledError"
E0927 00:34:56.831025   22114 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-14912/.minikube/profiles/addons-921129/client.crt: no such file or directory" logger="UnhandledError"
E0927 00:36:18.753280   22114 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-14912/.minikube/profiles/addons-921129/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 start -p ha-859802 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2 : (3m42.010801187s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-859802 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (222.72s)

                                                
                                    
TestMultiControlPlane/serial/DeployApp (5.86s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-859802 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-859802 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 kubectl -p ha-859802 -- rollout status deployment/busybox: (3.416729067s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-859802 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-859802 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-859802 -- exec busybox-7dff88458-248gw -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-859802 -- exec busybox-7dff88458-49kxh -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-859802 -- exec busybox-7dff88458-7wdkf -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-859802 -- exec busybox-7dff88458-248gw -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-859802 -- exec busybox-7dff88458-49kxh -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-859802 -- exec busybox-7dff88458-7wdkf -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-859802 -- exec busybox-7dff88458-248gw -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-859802 -- exec busybox-7dff88458-49kxh -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-859802 -- exec busybox-7dff88458-7wdkf -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (5.86s)

TestMultiControlPlane/serial/PingHostFromPods (1.32s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-859802 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-859802 -- exec busybox-7dff88458-248gw -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-859802 -- exec busybox-7dff88458-248gw -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-859802 -- exec busybox-7dff88458-49kxh -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-859802 -- exec busybox-7dff88458-49kxh -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-859802 -- exec busybox-7dff88458-7wdkf -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-859802 -- exec busybox-7dff88458-7wdkf -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.32s)
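The test above (ha_test.go:207) extracts the host IP from busybox `nslookup host.minikube.internal` output with `awk 'NR==5' | cut -d' ' -f3` before pinging it. A minimal sketch of that extraction against a canned reply; the sample text and the `sample`/`host_ip` names are hypothetical stand-ins for the live pod output the test actually parses:

```shell
# Hypothetical busybox-style nslookup reply; the real test pipes live
# output from a pod through the same awk/cut pipeline.
sample='Server:    10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local

Name:      host.minikube.internal
Address 1: 192.168.39.1 host.minikube.internal'

# NR==5 selects the final "Address 1: <ip> <name>" line;
# with a single-space delimiter, field 3 is the IP itself.
host_ip=$(printf '%s\n' "$sample" | awk 'NR==5' | cut -d' ' -f3)
echo "$host_ip"
```

The extracted address (192.168.39.1 here, the KVM bridge gateway in this run) is then handed to `ping -c 1` at ha_test.go:218.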

TestMultiControlPlane/serial/AddWorkerNode (63.68s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-859802 -v=7 --alsologtostderr
E0927 00:37:40.382784   22114 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-14912/.minikube/profiles/functional-471370/client.crt: no such file or directory" logger="UnhandledError"
E0927 00:37:40.389259   22114 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-14912/.minikube/profiles/functional-471370/client.crt: no such file or directory" logger="UnhandledError"
E0927 00:37:40.400702   22114 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-14912/.minikube/profiles/functional-471370/client.crt: no such file or directory" logger="UnhandledError"
E0927 00:37:40.422175   22114 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-14912/.minikube/profiles/functional-471370/client.crt: no such file or directory" logger="UnhandledError"
E0927 00:37:40.463607   22114 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-14912/.minikube/profiles/functional-471370/client.crt: no such file or directory" logger="UnhandledError"
E0927 00:37:40.545119   22114 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-14912/.minikube/profiles/functional-471370/client.crt: no such file or directory" logger="UnhandledError"
E0927 00:37:40.706654   22114 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-14912/.minikube/profiles/functional-471370/client.crt: no such file or directory" logger="UnhandledError"
E0927 00:37:41.028222   22114 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-14912/.minikube/profiles/functional-471370/client.crt: no such file or directory" logger="UnhandledError"
E0927 00:37:41.670243   22114 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-14912/.minikube/profiles/functional-471370/client.crt: no such file or directory" logger="UnhandledError"
E0927 00:37:42.952084   22114 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-14912/.minikube/profiles/functional-471370/client.crt: no such file or directory" logger="UnhandledError"
E0927 00:37:45.514264   22114 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-14912/.minikube/profiles/functional-471370/client.crt: no such file or directory" logger="UnhandledError"
E0927 00:37:50.636111   22114 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-14912/.minikube/profiles/functional-471370/client.crt: no such file or directory" logger="UnhandledError"
E0927 00:38:00.877834   22114 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-14912/.minikube/profiles/functional-471370/client.crt: no such file or directory" logger="UnhandledError"
E0927 00:38:21.360094   22114 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-14912/.minikube/profiles/functional-471370/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 node add -p ha-859802 -v=7 --alsologtostderr: (1m2.813077513s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-859802 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (63.68s)

TestMultiControlPlane/serial/NodeLabels (0.07s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-859802 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.07s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (0.89s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.89s)

TestMultiControlPlane/serial/CopyFile (13.16s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-linux-amd64 -p ha-859802 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-859802 cp testdata/cp-test.txt ha-859802:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-859802 ssh -n ha-859802 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-859802 cp ha-859802:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2415402021/001/cp-test_ha-859802.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-859802 ssh -n ha-859802 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-859802 cp ha-859802:/home/docker/cp-test.txt ha-859802-m02:/home/docker/cp-test_ha-859802_ha-859802-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-859802 ssh -n ha-859802 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-859802 ssh -n ha-859802-m02 "sudo cat /home/docker/cp-test_ha-859802_ha-859802-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-859802 cp ha-859802:/home/docker/cp-test.txt ha-859802-m03:/home/docker/cp-test_ha-859802_ha-859802-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-859802 ssh -n ha-859802 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-859802 ssh -n ha-859802-m03 "sudo cat /home/docker/cp-test_ha-859802_ha-859802-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-859802 cp ha-859802:/home/docker/cp-test.txt ha-859802-m04:/home/docker/cp-test_ha-859802_ha-859802-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-859802 ssh -n ha-859802 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-859802 ssh -n ha-859802-m04 "sudo cat /home/docker/cp-test_ha-859802_ha-859802-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-859802 cp testdata/cp-test.txt ha-859802-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-859802 ssh -n ha-859802-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-859802 cp ha-859802-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2415402021/001/cp-test_ha-859802-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-859802 ssh -n ha-859802-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-859802 cp ha-859802-m02:/home/docker/cp-test.txt ha-859802:/home/docker/cp-test_ha-859802-m02_ha-859802.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-859802 ssh -n ha-859802-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-859802 ssh -n ha-859802 "sudo cat /home/docker/cp-test_ha-859802-m02_ha-859802.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-859802 cp ha-859802-m02:/home/docker/cp-test.txt ha-859802-m03:/home/docker/cp-test_ha-859802-m02_ha-859802-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-859802 ssh -n ha-859802-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-859802 ssh -n ha-859802-m03 "sudo cat /home/docker/cp-test_ha-859802-m02_ha-859802-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-859802 cp ha-859802-m02:/home/docker/cp-test.txt ha-859802-m04:/home/docker/cp-test_ha-859802-m02_ha-859802-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-859802 ssh -n ha-859802-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-859802 ssh -n ha-859802-m04 "sudo cat /home/docker/cp-test_ha-859802-m02_ha-859802-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-859802 cp testdata/cp-test.txt ha-859802-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-859802 ssh -n ha-859802-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-859802 cp ha-859802-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2415402021/001/cp-test_ha-859802-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-859802 ssh -n ha-859802-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-859802 cp ha-859802-m03:/home/docker/cp-test.txt ha-859802:/home/docker/cp-test_ha-859802-m03_ha-859802.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-859802 ssh -n ha-859802-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-859802 ssh -n ha-859802 "sudo cat /home/docker/cp-test_ha-859802-m03_ha-859802.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-859802 cp ha-859802-m03:/home/docker/cp-test.txt ha-859802-m02:/home/docker/cp-test_ha-859802-m03_ha-859802-m02.txt
E0927 00:38:34.892178   22114 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-14912/.minikube/profiles/addons-921129/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-859802 ssh -n ha-859802-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-859802 ssh -n ha-859802-m02 "sudo cat /home/docker/cp-test_ha-859802-m03_ha-859802-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-859802 cp ha-859802-m03:/home/docker/cp-test.txt ha-859802-m04:/home/docker/cp-test_ha-859802-m03_ha-859802-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-859802 ssh -n ha-859802-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-859802 ssh -n ha-859802-m04 "sudo cat /home/docker/cp-test_ha-859802-m03_ha-859802-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-859802 cp testdata/cp-test.txt ha-859802-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-859802 ssh -n ha-859802-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-859802 cp ha-859802-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2415402021/001/cp-test_ha-859802-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-859802 ssh -n ha-859802-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-859802 cp ha-859802-m04:/home/docker/cp-test.txt ha-859802:/home/docker/cp-test_ha-859802-m04_ha-859802.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-859802 ssh -n ha-859802-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-859802 ssh -n ha-859802 "sudo cat /home/docker/cp-test_ha-859802-m04_ha-859802.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-859802 cp ha-859802-m04:/home/docker/cp-test.txt ha-859802-m02:/home/docker/cp-test_ha-859802-m04_ha-859802-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-859802 ssh -n ha-859802-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-859802 ssh -n ha-859802-m02 "sudo cat /home/docker/cp-test_ha-859802-m04_ha-859802-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-859802 cp ha-859802-m04:/home/docker/cp-test.txt ha-859802-m03:/home/docker/cp-test_ha-859802-m04_ha-859802-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-859802 ssh -n ha-859802-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-859802 ssh -n ha-859802-m03 "sudo cat /home/docker/cp-test_ha-859802-m04_ha-859802-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (13.16s)

TestMultiControlPlane/serial/StopSecondaryNode (13.95s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-linux-amd64 -p ha-859802 node stop m02 -v=7 --alsologtostderr
ha_test.go:363: (dbg) Done: out/minikube-linux-amd64 -p ha-859802 node stop m02 -v=7 --alsologtostderr: (13.311097535s)
ha_test.go:369: (dbg) Run:  out/minikube-linux-amd64 -p ha-859802 status -v=7 --alsologtostderr
ha_test.go:369: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-859802 status -v=7 --alsologtostderr: exit status 7 (639.76194ms)

-- stdout --
	ha-859802
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-859802-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-859802-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-859802-m04
	type: Worker
	host: Running
	kubelet: Running

-- /stdout --
** stderr ** 
	I0927 00:38:52.595954   37478 out.go:345] Setting OutFile to fd 1 ...
	I0927 00:38:52.596259   37478 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 00:38:52.596271   37478 out.go:358] Setting ErrFile to fd 2...
	I0927 00:38:52.596278   37478 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 00:38:52.596541   37478 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19711-14912/.minikube/bin
	I0927 00:38:52.596792   37478 out.go:352] Setting JSON to false
	I0927 00:38:52.596834   37478 mustload.go:65] Loading cluster: ha-859802
	I0927 00:38:52.596892   37478 notify.go:220] Checking for updates...
	I0927 00:38:52.597412   37478 config.go:182] Loaded profile config "ha-859802": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0927 00:38:52.597437   37478 status.go:174] checking status of ha-859802 ...
	I0927 00:38:52.598091   37478 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0927 00:38:52.598132   37478 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 00:38:52.613961   37478 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34265
	I0927 00:38:52.614424   37478 main.go:141] libmachine: () Calling .GetVersion
	I0927 00:38:52.615047   37478 main.go:141] libmachine: Using API Version  1
	I0927 00:38:52.615079   37478 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 00:38:52.615455   37478 main.go:141] libmachine: () Calling .GetMachineName
	I0927 00:38:52.615640   37478 main.go:141] libmachine: (ha-859802) Calling .GetState
	I0927 00:38:52.617437   37478 status.go:364] ha-859802 host status = "Running" (err=<nil>)
	I0927 00:38:52.617456   37478 host.go:66] Checking if "ha-859802" exists ...
	I0927 00:38:52.617887   37478 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0927 00:38:52.617946   37478 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 00:38:52.633790   37478 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37799
	I0927 00:38:52.634327   37478 main.go:141] libmachine: () Calling .GetVersion
	I0927 00:38:52.634903   37478 main.go:141] libmachine: Using API Version  1
	I0927 00:38:52.634933   37478 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 00:38:52.635367   37478 main.go:141] libmachine: () Calling .GetMachineName
	I0927 00:38:52.635558   37478 main.go:141] libmachine: (ha-859802) Calling .GetIP
	I0927 00:38:52.639365   37478 main.go:141] libmachine: (ha-859802) DBG | domain ha-859802 has defined MAC address 52:54:00:b4:a0:71 in network mk-ha-859802
	I0927 00:38:52.639945   37478 main.go:141] libmachine: (ha-859802) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:a0:71", ip: ""} in network mk-ha-859802: {Iface:virbr1 ExpiryTime:2024-09-27 01:33:45 +0000 UTC Type:0 Mac:52:54:00:b4:a0:71 Iaid: IPaddr:192.168.39.37 Prefix:24 Hostname:ha-859802 Clientid:01:52:54:00:b4:a0:71}
	I0927 00:38:52.639977   37478 main.go:141] libmachine: (ha-859802) DBG | domain ha-859802 has defined IP address 192.168.39.37 and MAC address 52:54:00:b4:a0:71 in network mk-ha-859802
	I0927 00:38:52.640169   37478 host.go:66] Checking if "ha-859802" exists ...
	I0927 00:38:52.640447   37478 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0927 00:38:52.640482   37478 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 00:38:52.655811   37478 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45965
	I0927 00:38:52.656277   37478 main.go:141] libmachine: () Calling .GetVersion
	I0927 00:38:52.656720   37478 main.go:141] libmachine: Using API Version  1
	I0927 00:38:52.656756   37478 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 00:38:52.657223   37478 main.go:141] libmachine: () Calling .GetMachineName
	I0927 00:38:52.657379   37478 main.go:141] libmachine: (ha-859802) Calling .DriverName
	I0927 00:38:52.657559   37478 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0927 00:38:52.657596   37478 main.go:141] libmachine: (ha-859802) Calling .GetSSHHostname
	I0927 00:38:52.660334   37478 main.go:141] libmachine: (ha-859802) DBG | domain ha-859802 has defined MAC address 52:54:00:b4:a0:71 in network mk-ha-859802
	I0927 00:38:52.660780   37478 main.go:141] libmachine: (ha-859802) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:a0:71", ip: ""} in network mk-ha-859802: {Iface:virbr1 ExpiryTime:2024-09-27 01:33:45 +0000 UTC Type:0 Mac:52:54:00:b4:a0:71 Iaid: IPaddr:192.168.39.37 Prefix:24 Hostname:ha-859802 Clientid:01:52:54:00:b4:a0:71}
	I0927 00:38:52.660810   37478 main.go:141] libmachine: (ha-859802) DBG | domain ha-859802 has defined IP address 192.168.39.37 and MAC address 52:54:00:b4:a0:71 in network mk-ha-859802
	I0927 00:38:52.660969   37478 main.go:141] libmachine: (ha-859802) Calling .GetSSHPort
	I0927 00:38:52.661157   37478 main.go:141] libmachine: (ha-859802) Calling .GetSSHKeyPath
	I0927 00:38:52.661314   37478 main.go:141] libmachine: (ha-859802) Calling .GetSSHUsername
	I0927 00:38:52.661447   37478 sshutil.go:53] new ssh client: &{IP:192.168.39.37 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19711-14912/.minikube/machines/ha-859802/id_rsa Username:docker}
	I0927 00:38:52.743864   37478 ssh_runner.go:195] Run: systemctl --version
	I0927 00:38:52.752224   37478 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0927 00:38:52.768306   37478 kubeconfig.go:125] found "ha-859802" server: "https://192.168.39.254:8443"
	I0927 00:38:52.768339   37478 api_server.go:166] Checking apiserver status ...
	I0927 00:38:52.768370   37478 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 00:38:52.786524   37478 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1922/cgroup
	W0927 00:38:52.797070   37478 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1922/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0927 00:38:52.797137   37478 ssh_runner.go:195] Run: ls
	I0927 00:38:52.801990   37478 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0927 00:38:52.808224   37478 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0927 00:38:52.808259   37478 status.go:456] ha-859802 apiserver status = Running (err=<nil>)
	I0927 00:38:52.808272   37478 status.go:176] ha-859802 status: &{Name:ha-859802 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0927 00:38:52.808293   37478 status.go:174] checking status of ha-859802-m02 ...
	I0927 00:38:52.808646   37478 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0927 00:38:52.808698   37478 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 00:38:52.823274   37478 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41851
	I0927 00:38:52.823719   37478 main.go:141] libmachine: () Calling .GetVersion
	I0927 00:38:52.824232   37478 main.go:141] libmachine: Using API Version  1
	I0927 00:38:52.824253   37478 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 00:38:52.824554   37478 main.go:141] libmachine: () Calling .GetMachineName
	I0927 00:38:52.824784   37478 main.go:141] libmachine: (ha-859802-m02) Calling .GetState
	I0927 00:38:52.826642   37478 status.go:364] ha-859802-m02 host status = "Stopped" (err=<nil>)
	I0927 00:38:52.826657   37478 status.go:377] host is not running, skipping remaining checks
	I0927 00:38:52.826664   37478 status.go:176] ha-859802-m02 status: &{Name:ha-859802-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0927 00:38:52.826684   37478 status.go:174] checking status of ha-859802-m03 ...
	I0927 00:38:52.827055   37478 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0927 00:38:52.827108   37478 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 00:38:52.841900   37478 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43453
	I0927 00:38:52.842326   37478 main.go:141] libmachine: () Calling .GetVersion
	I0927 00:38:52.842788   37478 main.go:141] libmachine: Using API Version  1
	I0927 00:38:52.842808   37478 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 00:38:52.843159   37478 main.go:141] libmachine: () Calling .GetMachineName
	I0927 00:38:52.843345   37478 main.go:141] libmachine: (ha-859802-m03) Calling .GetState
	I0927 00:38:52.844948   37478 status.go:364] ha-859802-m03 host status = "Running" (err=<nil>)
	I0927 00:38:52.844965   37478 host.go:66] Checking if "ha-859802-m03" exists ...
	I0927 00:38:52.845357   37478 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0927 00:38:52.845404   37478 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 00:38:52.860887   37478 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37087
	I0927 00:38:52.861287   37478 main.go:141] libmachine: () Calling .GetVersion
	I0927 00:38:52.861822   37478 main.go:141] libmachine: Using API Version  1
	I0927 00:38:52.861843   37478 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 00:38:52.862169   37478 main.go:141] libmachine: () Calling .GetMachineName
	I0927 00:38:52.862386   37478 main.go:141] libmachine: (ha-859802-m03) Calling .GetIP
	I0927 00:38:52.865328   37478 main.go:141] libmachine: (ha-859802-m03) DBG | domain ha-859802-m03 has defined MAC address 52:54:00:17:07:2a in network mk-ha-859802
	I0927 00:38:52.865798   37478 main.go:141] libmachine: (ha-859802-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:17:07:2a", ip: ""} in network mk-ha-859802: {Iface:virbr1 ExpiryTime:2024-09-27 01:36:03 +0000 UTC Type:0 Mac:52:54:00:17:07:2a Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:ha-859802-m03 Clientid:01:52:54:00:17:07:2a}
	I0927 00:38:52.865826   37478 main.go:141] libmachine: (ha-859802-m03) DBG | domain ha-859802-m03 has defined IP address 192.168.39.38 and MAC address 52:54:00:17:07:2a in network mk-ha-859802
	I0927 00:38:52.865968   37478 host.go:66] Checking if "ha-859802-m03" exists ...
	I0927 00:38:52.866345   37478 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0927 00:38:52.866383   37478 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 00:38:52.881297   37478 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38971
	I0927 00:38:52.881723   37478 main.go:141] libmachine: () Calling .GetVersion
	I0927 00:38:52.882230   37478 main.go:141] libmachine: Using API Version  1
	I0927 00:38:52.882256   37478 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 00:38:52.882614   37478 main.go:141] libmachine: () Calling .GetMachineName
	I0927 00:38:52.882846   37478 main.go:141] libmachine: (ha-859802-m03) Calling .DriverName
	I0927 00:38:52.883021   37478 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0927 00:38:52.883047   37478 main.go:141] libmachine: (ha-859802-m03) Calling .GetSSHHostname
	I0927 00:38:52.885859   37478 main.go:141] libmachine: (ha-859802-m03) DBG | domain ha-859802-m03 has defined MAC address 52:54:00:17:07:2a in network mk-ha-859802
	I0927 00:38:52.886306   37478 main.go:141] libmachine: (ha-859802-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:17:07:2a", ip: ""} in network mk-ha-859802: {Iface:virbr1 ExpiryTime:2024-09-27 01:36:03 +0000 UTC Type:0 Mac:52:54:00:17:07:2a Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:ha-859802-m03 Clientid:01:52:54:00:17:07:2a}
	I0927 00:38:52.886332   37478 main.go:141] libmachine: (ha-859802-m03) DBG | domain ha-859802-m03 has defined IP address 192.168.39.38 and MAC address 52:54:00:17:07:2a in network mk-ha-859802
	I0927 00:38:52.886442   37478 main.go:141] libmachine: (ha-859802-m03) Calling .GetSSHPort
	I0927 00:38:52.886648   37478 main.go:141] libmachine: (ha-859802-m03) Calling .GetSSHKeyPath
	I0927 00:38:52.886889   37478 main.go:141] libmachine: (ha-859802-m03) Calling .GetSSHUsername
	I0927 00:38:52.887076   37478 sshutil.go:53] new ssh client: &{IP:192.168.39.38 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19711-14912/.minikube/machines/ha-859802-m03/id_rsa Username:docker}
	I0927 00:38:52.970728   37478 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0927 00:38:52.988916   37478 kubeconfig.go:125] found "ha-859802" server: "https://192.168.39.254:8443"
	I0927 00:38:52.988948   37478 api_server.go:166] Checking apiserver status ...
	I0927 00:38:52.988987   37478 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 00:38:53.004535   37478 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1797/cgroup
	W0927 00:38:53.016064   37478 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1797/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0927 00:38:53.016116   37478 ssh_runner.go:195] Run: ls
	I0927 00:38:53.020917   37478 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0927 00:38:53.025798   37478 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0927 00:38:53.025830   37478 status.go:456] ha-859802-m03 apiserver status = Running (err=<nil>)
	I0927 00:38:53.025839   37478 status.go:176] ha-859802-m03 status: &{Name:ha-859802-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0927 00:38:53.025882   37478 status.go:174] checking status of ha-859802-m04 ...
	I0927 00:38:53.026300   37478 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0927 00:38:53.026342   37478 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 00:38:53.041930   37478 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40891
	I0927 00:38:53.042407   37478 main.go:141] libmachine: () Calling .GetVersion
	I0927 00:38:53.042956   37478 main.go:141] libmachine: Using API Version  1
	I0927 00:38:53.042983   37478 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 00:38:53.043376   37478 main.go:141] libmachine: () Calling .GetMachineName
	I0927 00:38:53.043629   37478 main.go:141] libmachine: (ha-859802-m04) Calling .GetState
	I0927 00:38:53.045266   37478 status.go:364] ha-859802-m04 host status = "Running" (err=<nil>)
	I0927 00:38:53.045286   37478 host.go:66] Checking if "ha-859802-m04" exists ...
	I0927 00:38:53.045622   37478 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0927 00:38:53.045661   37478 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 00:38:53.060851   37478 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37597
	I0927 00:38:53.061340   37478 main.go:141] libmachine: () Calling .GetVersion
	I0927 00:38:53.061818   37478 main.go:141] libmachine: Using API Version  1
	I0927 00:38:53.061841   37478 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 00:38:53.062160   37478 main.go:141] libmachine: () Calling .GetMachineName
	I0927 00:38:53.062356   37478 main.go:141] libmachine: (ha-859802-m04) Calling .GetIP
	I0927 00:38:53.065479   37478 main.go:141] libmachine: (ha-859802-m04) DBG | domain ha-859802-m04 has defined MAC address 52:54:00:21:7e:d5 in network mk-ha-859802
	I0927 00:38:53.065975   37478 main.go:141] libmachine: (ha-859802-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:7e:d5", ip: ""} in network mk-ha-859802: {Iface:virbr1 ExpiryTime:2024-09-27 01:37:36 +0000 UTC Type:0 Mac:52:54:00:21:7e:d5 Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:ha-859802-m04 Clientid:01:52:54:00:21:7e:d5}
	I0927 00:38:53.066004   37478 main.go:141] libmachine: (ha-859802-m04) DBG | domain ha-859802-m04 has defined IP address 192.168.39.160 and MAC address 52:54:00:21:7e:d5 in network mk-ha-859802
	I0927 00:38:53.066212   37478 host.go:66] Checking if "ha-859802-m04" exists ...
	I0927 00:38:53.066556   37478 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0927 00:38:53.066599   37478 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 00:38:53.083700   37478 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33457
	I0927 00:38:53.084246   37478 main.go:141] libmachine: () Calling .GetVersion
	I0927 00:38:53.084802   37478 main.go:141] libmachine: Using API Version  1
	I0927 00:38:53.084823   37478 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 00:38:53.085165   37478 main.go:141] libmachine: () Calling .GetMachineName
	I0927 00:38:53.085345   37478 main.go:141] libmachine: (ha-859802-m04) Calling .DriverName
	I0927 00:38:53.085530   37478 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0927 00:38:53.085553   37478 main.go:141] libmachine: (ha-859802-m04) Calling .GetSSHHostname
	I0927 00:38:53.088413   37478 main.go:141] libmachine: (ha-859802-m04) DBG | domain ha-859802-m04 has defined MAC address 52:54:00:21:7e:d5 in network mk-ha-859802
	I0927 00:38:53.088908   37478 main.go:141] libmachine: (ha-859802-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:7e:d5", ip: ""} in network mk-ha-859802: {Iface:virbr1 ExpiryTime:2024-09-27 01:37:36 +0000 UTC Type:0 Mac:52:54:00:21:7e:d5 Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:ha-859802-m04 Clientid:01:52:54:00:21:7e:d5}
	I0927 00:38:53.088935   37478 main.go:141] libmachine: (ha-859802-m04) DBG | domain ha-859802-m04 has defined IP address 192.168.39.160 and MAC address 52:54:00:21:7e:d5 in network mk-ha-859802
	I0927 00:38:53.089091   37478 main.go:141] libmachine: (ha-859802-m04) Calling .GetSSHPort
	I0927 00:38:53.089279   37478 main.go:141] libmachine: (ha-859802-m04) Calling .GetSSHKeyPath
	I0927 00:38:53.089427   37478 main.go:141] libmachine: (ha-859802-m04) Calling .GetSSHUsername
	I0927 00:38:53.089546   37478 sshutil.go:53] new ssh client: &{IP:192.168.39.160 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19711-14912/.minikube/machines/ha-859802-m04/id_rsa Username:docker}
	I0927 00:38:53.174681   37478 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0927 00:38:53.191126   37478 status.go:176] ha-859802-m04 status: &{Name:ha-859802-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (13.95s)

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.68s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.68s)

TestMultiControlPlane/serial/RestartSecondaryNode (38.93s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-linux-amd64 -p ha-859802 node start m02 -v=7 --alsologtostderr
E0927 00:39:02.323054   22114 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-14912/.minikube/profiles/functional-471370/client.crt: no such file or directory" logger="UnhandledError"
E0927 00:39:02.594768   22114 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-14912/.minikube/profiles/addons-921129/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:420: (dbg) Done: out/minikube-linux-amd64 -p ha-859802 node start m02 -v=7 --alsologtostderr: (37.994267716s)
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-859802 status -v=7 --alsologtostderr
ha_test.go:448: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (38.93s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.9s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.90s)

TestMultiControlPlane/serial/RestartClusterKeepsNodes (258.43s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-859802 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-linux-amd64 stop -p ha-859802 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Done: out/minikube-linux-amd64 stop -p ha-859802 -v=7 --alsologtostderr: (40.925178938s)
ha_test.go:467: (dbg) Run:  out/minikube-linux-amd64 start -p ha-859802 --wait=true -v=7 --alsologtostderr
E0927 00:40:24.244808   22114 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-14912/.minikube/profiles/functional-471370/client.crt: no such file or directory" logger="UnhandledError"
E0927 00:42:40.382445   22114 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-14912/.minikube/profiles/functional-471370/client.crt: no such file or directory" logger="UnhandledError"
E0927 00:43:08.086365   22114 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-14912/.minikube/profiles/functional-471370/client.crt: no such file or directory" logger="UnhandledError"
E0927 00:43:34.892081   22114 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-14912/.minikube/profiles/addons-921129/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:467: (dbg) Done: out/minikube-linux-amd64 start -p ha-859802 --wait=true -v=7 --alsologtostderr: (3m37.404926381s)
ha_test.go:472: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-859802
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (258.43s)

TestMultiControlPlane/serial/DeleteSecondaryNode (7.38s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-linux-amd64 -p ha-859802 node delete m03 -v=7 --alsologtostderr
ha_test.go:487: (dbg) Done: out/minikube-linux-amd64 -p ha-859802 node delete m03 -v=7 --alsologtostderr: (6.631942838s)
ha_test.go:493: (dbg) Run:  out/minikube-linux-amd64 -p ha-859802 status -v=7 --alsologtostderr
ha_test.go:511: (dbg) Run:  kubectl get nodes
ha_test.go:519: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (7.38s)

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.64s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.64s)

TestMultiControlPlane/serial/StopCluster (38.27s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-linux-amd64 -p ha-859802 stop -v=7 --alsologtostderr
ha_test.go:531: (dbg) Done: out/minikube-linux-amd64 -p ha-859802 stop -v=7 --alsologtostderr: (38.166051409s)
ha_test.go:537: (dbg) Run:  out/minikube-linux-amd64 -p ha-859802 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-859802 status -v=7 --alsologtostderr: exit status 7 (107.166956ms)

-- stdout --
	ha-859802
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-859802-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-859802-m04
	type: Worker
	host: Stopped
	kubelet: Stopped

-- /stdout --
** stderr **
	I0927 00:44:38.374488   39956 out.go:345] Setting OutFile to fd 1 ...
	I0927 00:44:38.374596   39956 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 00:44:38.374601   39956 out.go:358] Setting ErrFile to fd 2...
	I0927 00:44:38.374606   39956 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 00:44:38.374797   39956 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19711-14912/.minikube/bin
	I0927 00:44:38.374982   39956 out.go:352] Setting JSON to false
	I0927 00:44:38.375003   39956 mustload.go:65] Loading cluster: ha-859802
	I0927 00:44:38.375063   39956 notify.go:220] Checking for updates...
	I0927 00:44:38.375463   39956 config.go:182] Loaded profile config "ha-859802": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0927 00:44:38.375485   39956 status.go:174] checking status of ha-859802 ...
	I0927 00:44:38.375896   39956 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0927 00:44:38.375953   39956 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 00:44:38.396641   39956 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37845
	I0927 00:44:38.397224   39956 main.go:141] libmachine: () Calling .GetVersion
	I0927 00:44:38.397875   39956 main.go:141] libmachine: Using API Version  1
	I0927 00:44:38.397906   39956 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 00:44:38.398346   39956 main.go:141] libmachine: () Calling .GetMachineName
	I0927 00:44:38.398545   39956 main.go:141] libmachine: (ha-859802) Calling .GetState
	I0927 00:44:38.400403   39956 status.go:364] ha-859802 host status = "Stopped" (err=<nil>)
	I0927 00:44:38.400423   39956 status.go:377] host is not running, skipping remaining checks
	I0927 00:44:38.400431   39956 status.go:176] ha-859802 status: &{Name:ha-859802 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0927 00:44:38.400485   39956 status.go:174] checking status of ha-859802-m02 ...
	I0927 00:44:38.400779   39956 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0927 00:44:38.400817   39956 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 00:44:38.415692   39956 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45137
	I0927 00:44:38.416135   39956 main.go:141] libmachine: () Calling .GetVersion
	I0927 00:44:38.416676   39956 main.go:141] libmachine: Using API Version  1
	I0927 00:44:38.416711   39956 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 00:44:38.417082   39956 main.go:141] libmachine: () Calling .GetMachineName
	I0927 00:44:38.417292   39956 main.go:141] libmachine: (ha-859802-m02) Calling .GetState
	I0927 00:44:38.419073   39956 status.go:364] ha-859802-m02 host status = "Stopped" (err=<nil>)
	I0927 00:44:38.419090   39956 status.go:377] host is not running, skipping remaining checks
	I0927 00:44:38.419097   39956 status.go:176] ha-859802-m02 status: &{Name:ha-859802-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0927 00:44:38.419124   39956 status.go:174] checking status of ha-859802-m04 ...
	I0927 00:44:38.419439   39956 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0927 00:44:38.419487   39956 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 00:44:38.434289   39956 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34301
	I0927 00:44:38.434940   39956 main.go:141] libmachine: () Calling .GetVersion
	I0927 00:44:38.435489   39956 main.go:141] libmachine: Using API Version  1
	I0927 00:44:38.435510   39956 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 00:44:38.435845   39956 main.go:141] libmachine: () Calling .GetMachineName
	I0927 00:44:38.436025   39956 main.go:141] libmachine: (ha-859802-m04) Calling .GetState
	I0927 00:44:38.437790   39956 status.go:364] ha-859802-m04 host status = "Stopped" (err=<nil>)
	I0927 00:44:38.437807   39956 status.go:377] host is not running, skipping remaining checks
	I0927 00:44:38.437812   39956 status.go:176] ha-859802-m04 status: &{Name:ha-859802-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (38.27s)

TestMultiControlPlane/serial/RestartCluster (128.33s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-linux-amd64 start -p ha-859802 --wait=true -v=7 --alsologtostderr --driver=kvm2 
ha_test.go:560: (dbg) Done: out/minikube-linux-amd64 start -p ha-859802 --wait=true -v=7 --alsologtostderr --driver=kvm2 : (2m7.591062923s)
ha_test.go:566: (dbg) Run:  out/minikube-linux-amd64 -p ha-859802 status -v=7 --alsologtostderr
ha_test.go:584: (dbg) Run:  kubectl get nodes
ha_test.go:592: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (128.33s)

TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.65s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.65s)

TestMultiControlPlane/serial/AddSecondaryNode (80.4s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-859802 --control-plane -v=7 --alsologtostderr
E0927 00:47:40.383187   22114 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-14912/.minikube/profiles/functional-471370/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:605: (dbg) Done: out/minikube-linux-amd64 node add -p ha-859802 --control-plane -v=7 --alsologtostderr: (1m19.52033292s)
ha_test.go:611: (dbg) Run:  out/minikube-linux-amd64 -p ha-859802 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (80.40s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.87s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.87s)

TestImageBuild/serial/Setup (51.99s)

=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-linux-amd64 start -p image-211385 --driver=kvm2 
E0927 00:48:34.896075   22114 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-14912/.minikube/profiles/addons-921129/client.crt: no such file or directory" logger="UnhandledError"
image_test.go:69: (dbg) Done: out/minikube-linux-amd64 start -p image-211385 --driver=kvm2 : (51.986947388s)
--- PASS: TestImageBuild/serial/Setup (51.99s)

TestImageBuild/serial/NormalBuild (2.31s)

=== RUN   TestImageBuild/serial/NormalBuild
image_test.go:78: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-211385
image_test.go:78: (dbg) Done: out/minikube-linux-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-211385: (2.307273081s)
--- PASS: TestImageBuild/serial/NormalBuild (2.31s)

TestImageBuild/serial/BuildWithBuildArg (1.55s)

=== RUN   TestImageBuild/serial/BuildWithBuildArg
image_test.go:99: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-211385
image_test.go:99: (dbg) Done: out/minikube-linux-amd64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-211385: (1.54680018s)
--- PASS: TestImageBuild/serial/BuildWithBuildArg (1.55s)

TestImageBuild/serial/BuildWithDockerIgnore (1.09s)

=== RUN   TestImageBuild/serial/BuildWithDockerIgnore
image_test.go:133: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-211385
image_test.go:133: (dbg) Done: out/minikube-linux-amd64 image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-211385: (1.08888836s)
--- PASS: TestImageBuild/serial/BuildWithDockerIgnore (1.09s)

TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.98s)

=== RUN   TestImageBuild/serial/BuildWithSpecifiedDockerfile
image_test.go:88: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-211385
--- PASS: TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.98s)

TestJSONOutput/start/Command (97.72s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-329938 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2 
E0927 00:49:57.958887   22114 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-14912/.minikube/profiles/addons-921129/client.crt: no such file or directory" logger="UnhandledError"
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-329938 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2 : (1m37.717921817s)
--- PASS: TestJSONOutput/start/Command (97.72s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.59s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-329938 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.59s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.55s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-329938 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.55s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (7.56s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-329938 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-329938 --output=json --user=testUser: (7.557731906s)
--- PASS: TestJSONOutput/stop/Command (7.56s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.2s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-604085 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-604085 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (62.197359ms)
-- stdout --
	{"specversion":"1.0","id":"724d0b2b-5693-424b-a155-186c8cde3d5e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-604085] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"7e59db85-3047-4ab5-9a18-f5afc5695718","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19711"}}
	{"specversion":"1.0","id":"c2cb654d-a2c4-4562-a9e5-27c8f7c9bfbe","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"3bea271d-af52-4be6-a417-921571b79d36","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19711-14912/kubeconfig"}}
	{"specversion":"1.0","id":"4f945f06-b7e7-42b9-9cf3-403401552305","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19711-14912/.minikube"}}
	{"specversion":"1.0","id":"eb460afb-ad56-402a-befe-9edec0ad0928","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"e08d9975-dbf7-47a9-a1d9-22c768ed00e2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"52a27539-6352-4367-ba97-61b98d11d028","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-604085" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-604085
--- PASS: TestErrorJSONOutput (0.20s)
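Each line of the `--output=json` stream above is a CloudEvents envelope, and the final `io.k8s.sigs.minikube.error` event carries the machine-readable exit code that matches the process's `exit status 56`. A minimal shell sketch of pulling that field out; the event below is a trimmed copy of the one in the log, not the full envelope, and `sed` is used only to stay dependency-free (`jq` would be the more robust choice):

```shell
# Trimmed copy of the error CloudEvent emitted by the failed start
line='{"specversion":"1.0","type":"io.k8s.sigs.minikube.error","data":{"exitcode":"56","name":"DRV_UNSUPPORTED_OS"}}'
# Capture the quoted numeric value of "exitcode"; print nothing on non-matching lines
code=$(printf '%s\n' "$line" | sed -n 's/.*"exitcode":"\([0-9]*\)".*/\1/p')
echo "$code"
```

The extracted value agrees with the `exit status 56` reported by the test harness above.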

TestMainNoArgs (0.04s)
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.04s)

TestMinikubeProfile (103.82s)
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-344133 --driver=kvm2 
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-344133 --driver=kvm2 : (50.846077467s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-355715 --driver=kvm2 
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-355715 --driver=kvm2 : (50.118285878s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-344133
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-355715
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-355715" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-355715
helpers_test.go:175: Cleaning up "first-344133" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-344133
E0927 00:52:40.382892   22114 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-14912/.minikube/profiles/functional-471370/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p first-344133: (1.011556314s)
--- PASS: TestMinikubeProfile (103.82s)

TestMountStart/serial/StartWithMountFirst (29.31s)
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-142861 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2 
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-142861 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2 : (28.306358163s)
--- PASS: TestMountStart/serial/StartWithMountFirst (29.31s)

TestMountStart/serial/VerifyMountFirst (0.4s)
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-142861 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-142861 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountFirst (0.40s)

TestMountStart/serial/StartWithMountSecond (31.05s)
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-154794 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2 
E0927 00:53:34.895734   22114 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-14912/.minikube/profiles/addons-921129/client.crt: no such file or directory" logger="UnhandledError"
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-154794 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2 : (30.052616113s)
--- PASS: TestMountStart/serial/StartWithMountSecond (31.05s)

TestMountStart/serial/VerifyMountSecond (0.39s)
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-154794 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-154794 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountSecond (0.39s)

TestMountStart/serial/DeleteFirst (0.69s)
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-142861 --alsologtostderr -v=5
--- PASS: TestMountStart/serial/DeleteFirst (0.69s)

TestMountStart/serial/VerifyMountPostDelete (0.38s)
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-154794 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-154794 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.38s)

TestMountStart/serial/Stop (2.28s)
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-154794
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-154794: (2.281502789s)
--- PASS: TestMountStart/serial/Stop (2.28s)

TestMountStart/serial/RestartStopped (24.98s)
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-154794
E0927 00:54:03.450292   22114 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-14912/.minikube/profiles/functional-471370/client.crt: no such file or directory" logger="UnhandledError"
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-154794: (23.978310833s)
--- PASS: TestMountStart/serial/RestartStopped (24.98s)

TestMountStart/serial/VerifyMountPostStop (0.38s)
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-154794 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-154794 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.38s)

TestMultiNode/serial/FreshStart2Nodes (133.11s)
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-527684 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2 
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-527684 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2 : (2m12.700051783s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-527684 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (133.11s)

TestMultiNode/serial/DeployApp2Nodes (4.21s)
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-527684 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-527684 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-527684 -- rollout status deployment/busybox: (2.696519174s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-527684 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-527684 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-527684 -- exec busybox-7dff88458-jbs6k -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-527684 -- exec busybox-7dff88458-zsxh5 -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-527684 -- exec busybox-7dff88458-jbs6k -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-527684 -- exec busybox-7dff88458-zsxh5 -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-527684 -- exec busybox-7dff88458-jbs6k -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-527684 -- exec busybox-7dff88458-zsxh5 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (4.21s)

TestMultiNode/serial/PingHostFrom2Pods (0.81s)
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-527684 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-527684 -- exec busybox-7dff88458-jbs6k -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-527684 -- exec busybox-7dff88458-jbs6k -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-527684 -- exec busybox-7dff88458-zsxh5 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-527684 -- exec busybox-7dff88458-zsxh5 -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.81s)
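The host lookup above relies on the pod image's BusyBox-style `nslookup` printing the resolved address on line 5 of its output, which `awk 'NR==5'` and `cut -d' ' -f3` then slice down to a bare IP for `ping -c 1`. A standalone sketch of that pipeline; the sample transcript below is an assumed BusyBox-style output, not captured from the pod:

```shell
# Assumed BusyBox-style nslookup transcript for host.minikube.internal
out='Server:    10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local

Name:      host.minikube.internal
Address 1: 192.168.39.1'
# Line 5 holds the answer record; the third space-delimited field is the IP
ip=$(printf '%s\n' "$out" | awk 'NR==5' | cut -d' ' -f3)
echo "$ip"
```

The resulting address (`192.168.39.1` in this run) is the KVM network's gateway, i.e. the host as seen from inside the cluster.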

TestMultiNode/serial/AddNode (58.05s)
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-527684 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-527684 -v 3 --alsologtostderr: (57.473554471s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-527684 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (58.05s)

TestMultiNode/serial/MultiNodeLabels (0.06s)
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-527684 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

TestMultiNode/serial/ProfileList (0.61s)
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.61s)

TestMultiNode/serial/CopyFile (7.5s)
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-527684 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-527684 cp testdata/cp-test.txt multinode-527684:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-527684 ssh -n multinode-527684 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-527684 cp multinode-527684:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3048961714/001/cp-test_multinode-527684.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-527684 ssh -n multinode-527684 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-527684 cp multinode-527684:/home/docker/cp-test.txt multinode-527684-m02:/home/docker/cp-test_multinode-527684_multinode-527684-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-527684 ssh -n multinode-527684 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-527684 ssh -n multinode-527684-m02 "sudo cat /home/docker/cp-test_multinode-527684_multinode-527684-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-527684 cp multinode-527684:/home/docker/cp-test.txt multinode-527684-m03:/home/docker/cp-test_multinode-527684_multinode-527684-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-527684 ssh -n multinode-527684 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-527684 ssh -n multinode-527684-m03 "sudo cat /home/docker/cp-test_multinode-527684_multinode-527684-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-527684 cp testdata/cp-test.txt multinode-527684-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-527684 ssh -n multinode-527684-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-527684 cp multinode-527684-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3048961714/001/cp-test_multinode-527684-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-527684 ssh -n multinode-527684-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-527684 cp multinode-527684-m02:/home/docker/cp-test.txt multinode-527684:/home/docker/cp-test_multinode-527684-m02_multinode-527684.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-527684 ssh -n multinode-527684-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-527684 ssh -n multinode-527684 "sudo cat /home/docker/cp-test_multinode-527684-m02_multinode-527684.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-527684 cp multinode-527684-m02:/home/docker/cp-test.txt multinode-527684-m03:/home/docker/cp-test_multinode-527684-m02_multinode-527684-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-527684 ssh -n multinode-527684-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-527684 ssh -n multinode-527684-m03 "sudo cat /home/docker/cp-test_multinode-527684-m02_multinode-527684-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-527684 cp testdata/cp-test.txt multinode-527684-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-527684 ssh -n multinode-527684-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-527684 cp multinode-527684-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3048961714/001/cp-test_multinode-527684-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-527684 ssh -n multinode-527684-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-527684 cp multinode-527684-m03:/home/docker/cp-test.txt multinode-527684:/home/docker/cp-test_multinode-527684-m03_multinode-527684.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-527684 ssh -n multinode-527684-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-527684 ssh -n multinode-527684 "sudo cat /home/docker/cp-test_multinode-527684-m03_multinode-527684.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-527684 cp multinode-527684-m03:/home/docker/cp-test.txt multinode-527684-m02:/home/docker/cp-test_multinode-527684-m03_multinode-527684-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-527684 ssh -n multinode-527684-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-527684 ssh -n multinode-527684-m02 "sudo cat /home/docker/cp-test_multinode-527684-m03_multinode-527684-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (7.50s)

TestMultiNode/serial/StopNode (3.38s)
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-527684 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-527684 node stop m03: (2.514778559s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-527684 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-527684 status: exit status 7 (431.067125ms)
-- stdout --
	multinode-527684
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-527684-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-527684-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-527684 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-527684 status --alsologtostderr: exit status 7 (432.235301ms)
-- stdout --
	multinode-527684
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-527684-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-527684-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I0927 00:57:39.584314   48475 out.go:345] Setting OutFile to fd 1 ...
	I0927 00:57:39.584444   48475 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 00:57:39.584452   48475 out.go:358] Setting ErrFile to fd 2...
	I0927 00:57:39.584457   48475 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 00:57:39.584633   48475 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19711-14912/.minikube/bin
	I0927 00:57:39.584802   48475 out.go:352] Setting JSON to false
	I0927 00:57:39.584837   48475 mustload.go:65] Loading cluster: multinode-527684
	I0927 00:57:39.584954   48475 notify.go:220] Checking for updates...
	I0927 00:57:39.585225   48475 config.go:182] Loaded profile config "multinode-527684": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0927 00:57:39.585244   48475 status.go:174] checking status of multinode-527684 ...
	I0927 00:57:39.585634   48475 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0927 00:57:39.585673   48475 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 00:57:39.601415   48475 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44707
	I0927 00:57:39.601900   48475 main.go:141] libmachine: () Calling .GetVersion
	I0927 00:57:39.602522   48475 main.go:141] libmachine: Using API Version  1
	I0927 00:57:39.602543   48475 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 00:57:39.603030   48475 main.go:141] libmachine: () Calling .GetMachineName
	I0927 00:57:39.603220   48475 main.go:141] libmachine: (multinode-527684) Calling .GetState
	I0927 00:57:39.605138   48475 status.go:364] multinode-527684 host status = "Running" (err=<nil>)
	I0927 00:57:39.605157   48475 host.go:66] Checking if "multinode-527684" exists ...
	I0927 00:57:39.605451   48475 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0927 00:57:39.605514   48475 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 00:57:39.621412   48475 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39679
	I0927 00:57:39.621950   48475 main.go:141] libmachine: () Calling .GetVersion
	I0927 00:57:39.622414   48475 main.go:141] libmachine: Using API Version  1
	I0927 00:57:39.622454   48475 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 00:57:39.622786   48475 main.go:141] libmachine: () Calling .GetMachineName
	I0927 00:57:39.623020   48475 main.go:141] libmachine: (multinode-527684) Calling .GetIP
	I0927 00:57:39.626194   48475 main.go:141] libmachine: (multinode-527684) DBG | domain multinode-527684 has defined MAC address 52:54:00:48:75:01 in network mk-multinode-527684
	I0927 00:57:39.626657   48475 main.go:141] libmachine: (multinode-527684) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:75:01", ip: ""} in network mk-multinode-527684: {Iface:virbr1 ExpiryTime:2024-09-27 01:54:26 +0000 UTC Type:0 Mac:52:54:00:48:75:01 Iaid: IPaddr:192.168.39.56 Prefix:24 Hostname:multinode-527684 Clientid:01:52:54:00:48:75:01}
	I0927 00:57:39.626692   48475 main.go:141] libmachine: (multinode-527684) DBG | domain multinode-527684 has defined IP address 192.168.39.56 and MAC address 52:54:00:48:75:01 in network mk-multinode-527684
	I0927 00:57:39.626804   48475 host.go:66] Checking if "multinode-527684" exists ...
	I0927 00:57:39.627116   48475 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0927 00:57:39.627156   48475 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 00:57:39.642973   48475 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44111
	I0927 00:57:39.643390   48475 main.go:141] libmachine: () Calling .GetVersion
	I0927 00:57:39.643839   48475 main.go:141] libmachine: Using API Version  1
	I0927 00:57:39.643869   48475 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 00:57:39.644146   48475 main.go:141] libmachine: () Calling .GetMachineName
	I0927 00:57:39.644340   48475 main.go:141] libmachine: (multinode-527684) Calling .DriverName
	I0927 00:57:39.644487   48475 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0927 00:57:39.644513   48475 main.go:141] libmachine: (multinode-527684) Calling .GetSSHHostname
	I0927 00:57:39.647149   48475 main.go:141] libmachine: (multinode-527684) DBG | domain multinode-527684 has defined MAC address 52:54:00:48:75:01 in network mk-multinode-527684
	I0927 00:57:39.647567   48475 main.go:141] libmachine: (multinode-527684) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:75:01", ip: ""} in network mk-multinode-527684: {Iface:virbr1 ExpiryTime:2024-09-27 01:54:26 +0000 UTC Type:0 Mac:52:54:00:48:75:01 Iaid: IPaddr:192.168.39.56 Prefix:24 Hostname:multinode-527684 Clientid:01:52:54:00:48:75:01}
	I0927 00:57:39.647614   48475 main.go:141] libmachine: (multinode-527684) DBG | domain multinode-527684 has defined IP address 192.168.39.56 and MAC address 52:54:00:48:75:01 in network mk-multinode-527684
	I0927 00:57:39.647729   48475 main.go:141] libmachine: (multinode-527684) Calling .GetSSHPort
	I0927 00:57:39.647909   48475 main.go:141] libmachine: (multinode-527684) Calling .GetSSHKeyPath
	I0927 00:57:39.648044   48475 main.go:141] libmachine: (multinode-527684) Calling .GetSSHUsername
	I0927 00:57:39.648204   48475 sshutil.go:53] new ssh client: &{IP:192.168.39.56 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19711-14912/.minikube/machines/multinode-527684/id_rsa Username:docker}
	I0927 00:57:39.733236   48475 ssh_runner.go:195] Run: systemctl --version
	I0927 00:57:39.739431   48475 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0927 00:57:39.755053   48475 kubeconfig.go:125] found "multinode-527684" server: "https://192.168.39.56:8443"
	I0927 00:57:39.755095   48475 api_server.go:166] Checking apiserver status ...
	I0927 00:57:39.755145   48475 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 00:57:39.769589   48475 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1841/cgroup
	W0927 00:57:39.780252   48475 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1841/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0927 00:57:39.780324   48475 ssh_runner.go:195] Run: ls
	I0927 00:57:39.784931   48475 api_server.go:253] Checking apiserver healthz at https://192.168.39.56:8443/healthz ...
	I0927 00:57:39.790299   48475 api_server.go:279] https://192.168.39.56:8443/healthz returned 200:
	ok
	I0927 00:57:39.790333   48475 status.go:456] multinode-527684 apiserver status = Running (err=<nil>)
	I0927 00:57:39.790346   48475 status.go:176] multinode-527684 status: &{Name:multinode-527684 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0927 00:57:39.790362   48475 status.go:174] checking status of multinode-527684-m02 ...
	I0927 00:57:39.790724   48475 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0927 00:57:39.790780   48475 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 00:57:39.806610   48475 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38127
	I0927 00:57:39.807113   48475 main.go:141] libmachine: () Calling .GetVersion
	I0927 00:57:39.807607   48475 main.go:141] libmachine: Using API Version  1
	I0927 00:57:39.807628   48475 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 00:57:39.807945   48475 main.go:141] libmachine: () Calling .GetMachineName
	I0927 00:57:39.808160   48475 main.go:141] libmachine: (multinode-527684-m02) Calling .GetState
	I0927 00:57:39.809645   48475 status.go:364] multinode-527684-m02 host status = "Running" (err=<nil>)
	I0927 00:57:39.809664   48475 host.go:66] Checking if "multinode-527684-m02" exists ...
	I0927 00:57:39.809977   48475 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0927 00:57:39.810013   48475 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 00:57:39.825619   48475 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42799
	I0927 00:57:39.826115   48475 main.go:141] libmachine: () Calling .GetVersion
	I0927 00:57:39.826615   48475 main.go:141] libmachine: Using API Version  1
	I0927 00:57:39.826637   48475 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 00:57:39.827031   48475 main.go:141] libmachine: () Calling .GetMachineName
	I0927 00:57:39.827255   48475 main.go:141] libmachine: (multinode-527684-m02) Calling .GetIP
	I0927 00:57:39.830224   48475 main.go:141] libmachine: (multinode-527684-m02) DBG | domain multinode-527684-m02 has defined MAC address 52:54:00:2c:1f:f6 in network mk-multinode-527684
	I0927 00:57:39.830678   48475 main.go:141] libmachine: (multinode-527684-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:1f:f6", ip: ""} in network mk-multinode-527684: {Iface:virbr1 ExpiryTime:2024-09-27 01:55:40 +0000 UTC Type:0 Mac:52:54:00:2c:1f:f6 Iaid: IPaddr:192.168.39.59 Prefix:24 Hostname:multinode-527684-m02 Clientid:01:52:54:00:2c:1f:f6}
	I0927 00:57:39.830706   48475 main.go:141] libmachine: (multinode-527684-m02) DBG | domain multinode-527684-m02 has defined IP address 192.168.39.59 and MAC address 52:54:00:2c:1f:f6 in network mk-multinode-527684
	I0927 00:57:39.830887   48475 host.go:66] Checking if "multinode-527684-m02" exists ...
	I0927 00:57:39.831228   48475 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0927 00:57:39.831288   48475 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 00:57:39.847438   48475 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42705
	I0927 00:57:39.847898   48475 main.go:141] libmachine: () Calling .GetVersion
	I0927 00:57:39.848434   48475 main.go:141] libmachine: Using API Version  1
	I0927 00:57:39.848457   48475 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 00:57:39.848787   48475 main.go:141] libmachine: () Calling .GetMachineName
	I0927 00:57:39.848961   48475 main.go:141] libmachine: (multinode-527684-m02) Calling .DriverName
	I0927 00:57:39.849152   48475 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0927 00:57:39.849171   48475 main.go:141] libmachine: (multinode-527684-m02) Calling .GetSSHHostname
	I0927 00:57:39.852360   48475 main.go:141] libmachine: (multinode-527684-m02) DBG | domain multinode-527684-m02 has defined MAC address 52:54:00:2c:1f:f6 in network mk-multinode-527684
	I0927 00:57:39.852890   48475 main.go:141] libmachine: (multinode-527684-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:1f:f6", ip: ""} in network mk-multinode-527684: {Iface:virbr1 ExpiryTime:2024-09-27 01:55:40 +0000 UTC Type:0 Mac:52:54:00:2c:1f:f6 Iaid: IPaddr:192.168.39.59 Prefix:24 Hostname:multinode-527684-m02 Clientid:01:52:54:00:2c:1f:f6}
	I0927 00:57:39.852921   48475 main.go:141] libmachine: (multinode-527684-m02) DBG | domain multinode-527684-m02 has defined IP address 192.168.39.59 and MAC address 52:54:00:2c:1f:f6 in network mk-multinode-527684
	I0927 00:57:39.853143   48475 main.go:141] libmachine: (multinode-527684-m02) Calling .GetSSHPort
	I0927 00:57:39.853347   48475 main.go:141] libmachine: (multinode-527684-m02) Calling .GetSSHKeyPath
	I0927 00:57:39.853553   48475 main.go:141] libmachine: (multinode-527684-m02) Calling .GetSSHUsername
	I0927 00:57:39.853684   48475 sshutil.go:53] new ssh client: &{IP:192.168.39.59 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19711-14912/.minikube/machines/multinode-527684-m02/id_rsa Username:docker}
	I0927 00:57:39.938492   48475 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0927 00:57:39.952995   48475 status.go:176] multinode-527684-m02 status: &{Name:multinode-527684-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0927 00:57:39.953032   48475 status.go:174] checking status of multinode-527684-m03 ...
	I0927 00:57:39.953419   48475 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0927 00:57:39.953459   48475 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 00:57:39.968921   48475 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42095
	I0927 00:57:39.969399   48475 main.go:141] libmachine: () Calling .GetVersion
	I0927 00:57:39.969910   48475 main.go:141] libmachine: Using API Version  1
	I0927 00:57:39.969940   48475 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 00:57:39.970295   48475 main.go:141] libmachine: () Calling .GetMachineName
	I0927 00:57:39.970482   48475 main.go:141] libmachine: (multinode-527684-m03) Calling .GetState
	I0927 00:57:39.972144   48475 status.go:364] multinode-527684-m03 host status = "Stopped" (err=<nil>)
	I0927 00:57:39.972163   48475 status.go:377] host is not running, skipping remaining checks
	I0927 00:57:39.972170   48475 status.go:176] multinode-527684-m03 status: &{Name:multinode-527684-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopNode (3.38s)

TestMultiNode/serial/StartAfterStop (42.74s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-527684 node start m03 -v=7 --alsologtostderr
E0927 00:57:40.383165   22114 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-14912/.minikube/profiles/functional-471370/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-527684 node start m03 -v=7 --alsologtostderr: (42.07963073s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-527684 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (42.74s)

TestMultiNode/serial/RestartKeepsNodes (305.3s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-527684
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-527684
E0927 00:58:34.895802   22114 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-14912/.minikube/profiles/addons-921129/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:321: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-527684: (28.133255355s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-527684 --wait=true -v=8 --alsologtostderr
E0927 01:02:40.383110   22114 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-14912/.minikube/profiles/functional-471370/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-527684 --wait=true -v=8 --alsologtostderr: (4m37.068791823s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-527684
--- PASS: TestMultiNode/serial/RestartKeepsNodes (305.30s)

TestMultiNode/serial/DeleteNode (2.27s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-527684 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-527684 node delete m03: (1.718051771s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-527684 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (2.27s)

TestMultiNode/serial/StopMultiNode (25.87s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-527684 stop
E0927 01:03:34.896309   22114 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-14912/.minikube/profiles/addons-921129/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:345: (dbg) Done: out/minikube-linux-amd64 -p multinode-527684 stop: (25.6959937s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-527684 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-527684 status: exit status 7 (86.558048ms)

-- stdout --
	multinode-527684
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-527684-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-527684 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-527684 status --alsologtostderr: exit status 7 (84.058813ms)

-- stdout --
	multinode-527684
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-527684-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0927 01:03:56.110444   51186 out.go:345] Setting OutFile to fd 1 ...
	I0927 01:03:56.110547   51186 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 01:03:56.110555   51186 out.go:358] Setting ErrFile to fd 2...
	I0927 01:03:56.110559   51186 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 01:03:56.110789   51186 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19711-14912/.minikube/bin
	I0927 01:03:56.111094   51186 out.go:352] Setting JSON to false
	I0927 01:03:56.111130   51186 mustload.go:65] Loading cluster: multinode-527684
	I0927 01:03:56.111262   51186 notify.go:220] Checking for updates...
	I0927 01:03:56.111542   51186 config.go:182] Loaded profile config "multinode-527684": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0927 01:03:56.111569   51186 status.go:174] checking status of multinode-527684 ...
	I0927 01:03:56.112082   51186 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0927 01:03:56.112128   51186 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 01:03:56.128036   51186 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44687
	I0927 01:03:56.128446   51186 main.go:141] libmachine: () Calling .GetVersion
	I0927 01:03:56.129074   51186 main.go:141] libmachine: Using API Version  1
	I0927 01:03:56.129105   51186 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 01:03:56.129605   51186 main.go:141] libmachine: () Calling .GetMachineName
	I0927 01:03:56.129847   51186 main.go:141] libmachine: (multinode-527684) Calling .GetState
	I0927 01:03:56.132028   51186 status.go:364] multinode-527684 host status = "Stopped" (err=<nil>)
	I0927 01:03:56.132043   51186 status.go:377] host is not running, skipping remaining checks
	I0927 01:03:56.132049   51186 status.go:176] multinode-527684 status: &{Name:multinode-527684 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0927 01:03:56.132087   51186 status.go:174] checking status of multinode-527684-m02 ...
	I0927 01:03:56.132394   51186 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0927 01:03:56.132437   51186 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 01:03:56.147400   51186 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46163
	I0927 01:03:56.147846   51186 main.go:141] libmachine: () Calling .GetVersion
	I0927 01:03:56.148363   51186 main.go:141] libmachine: Using API Version  1
	I0927 01:03:56.148385   51186 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 01:03:56.148712   51186 main.go:141] libmachine: () Calling .GetMachineName
	I0927 01:03:56.148892   51186 main.go:141] libmachine: (multinode-527684-m02) Calling .GetState
	I0927 01:03:56.150493   51186 status.go:364] multinode-527684-m02 host status = "Stopped" (err=<nil>)
	I0927 01:03:56.150510   51186 status.go:377] host is not running, skipping remaining checks
	I0927 01:03:56.150515   51186 status.go:176] multinode-527684-m02 status: &{Name:multinode-527684-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (25.87s)

TestMultiNode/serial/RestartMultiNode (119.69s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-527684 --wait=true -v=8 --alsologtostderr --driver=kvm2 
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-527684 --wait=true -v=8 --alsologtostderr --driver=kvm2 : (1m59.144346789s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-527684 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (119.69s)

TestMultiNode/serial/ValidateNameConflict (54.66s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-527684
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-527684-m02 --driver=kvm2 
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-527684-m02 --driver=kvm2 : exit status 14 (65.374205ms)

-- stdout --
	* [multinode-527684-m02] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19711
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19711-14912/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19711-14912/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	! Profile name 'multinode-527684-m02' is duplicated with machine name 'multinode-527684-m02' in profile 'multinode-527684'
	X Exiting due to MK_USAGE: Profile name should be unique

** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-527684-m03 --driver=kvm2 
E0927 01:06:37.961113   22114 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-14912/.minikube/profiles/addons-921129/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-527684-m03 --driver=kvm2 : (53.553705637s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-527684
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-527684: exit status 80 (216.091196ms)

-- stdout --
	* Adding node m03 to cluster multinode-527684 as [worker]
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-527684-m03 already exists in multinode-527684-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-527684-m03
--- PASS: TestMultiNode/serial/ValidateNameConflict (54.66s)

TestPreload (185.17s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-371020 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --kubernetes-version=v1.24.4
E0927 01:07:40.382723   22114 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-14912/.minikube/profiles/functional-471370/client.crt: no such file or directory" logger="UnhandledError"
E0927 01:08:34.892961   22114 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-14912/.minikube/profiles/addons-921129/client.crt: no such file or directory" logger="UnhandledError"
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-371020 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --kubernetes-version=v1.24.4: (1m58.469148727s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-371020 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-amd64 -p test-preload-371020 image pull gcr.io/k8s-minikube/busybox: (1.665792345s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-371020
preload_test.go:58: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-371020: (12.569088622s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-371020 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2 
preload_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-371020 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2 : (51.172654039s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-371020 image list
helpers_test.go:175: Cleaning up "test-preload-371020" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-371020
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-371020: (1.097086389s)
--- PASS: TestPreload (185.17s)

TestScheduledStopUnix (121.47s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-915816 --memory=2048 --driver=kvm2 
E0927 01:10:43.454513   22114 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-14912/.minikube/profiles/functional-471370/client.crt: no such file or directory" logger="UnhandledError"
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-915816 --memory=2048 --driver=kvm2 : (49.803751174s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-915816 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-915816 -n scheduled-stop-915816
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-915816 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
I0927 01:10:47.658513   22114 retry.go:31] will retry after 101.821µs: open /home/jenkins/minikube-integration/19711-14912/.minikube/profiles/scheduled-stop-915816/pid: no such file or directory
I0927 01:10:47.659707   22114 retry.go:31] will retry after 223.053µs: open /home/jenkins/minikube-integration/19711-14912/.minikube/profiles/scheduled-stop-915816/pid: no such file or directory
I0927 01:10:47.660904   22114 retry.go:31] will retry after 298.57µs: open /home/jenkins/minikube-integration/19711-14912/.minikube/profiles/scheduled-stop-915816/pid: no such file or directory
I0927 01:10:47.662076   22114 retry.go:31] will retry after 384.852µs: open /home/jenkins/minikube-integration/19711-14912/.minikube/profiles/scheduled-stop-915816/pid: no such file or directory
I0927 01:10:47.663249   22114 retry.go:31] will retry after 603.607µs: open /home/jenkins/minikube-integration/19711-14912/.minikube/profiles/scheduled-stop-915816/pid: no such file or directory
I0927 01:10:47.664364   22114 retry.go:31] will retry after 827.959µs: open /home/jenkins/minikube-integration/19711-14912/.minikube/profiles/scheduled-stop-915816/pid: no such file or directory
I0927 01:10:47.665526   22114 retry.go:31] will retry after 657.94µs: open /home/jenkins/minikube-integration/19711-14912/.minikube/profiles/scheduled-stop-915816/pid: no such file or directory
I0927 01:10:47.666729   22114 retry.go:31] will retry after 1.946379ms: open /home/jenkins/minikube-integration/19711-14912/.minikube/profiles/scheduled-stop-915816/pid: no such file or directory
I0927 01:10:47.668928   22114 retry.go:31] will retry after 3.290832ms: open /home/jenkins/minikube-integration/19711-14912/.minikube/profiles/scheduled-stop-915816/pid: no such file or directory
I0927 01:10:47.673221   22114 retry.go:31] will retry after 4.636069ms: open /home/jenkins/minikube-integration/19711-14912/.minikube/profiles/scheduled-stop-915816/pid: no such file or directory
I0927 01:10:47.678492   22114 retry.go:31] will retry after 5.36693ms: open /home/jenkins/minikube-integration/19711-14912/.minikube/profiles/scheduled-stop-915816/pid: no such file or directory
I0927 01:10:47.684775   22114 retry.go:31] will retry after 7.471407ms: open /home/jenkins/minikube-integration/19711-14912/.minikube/profiles/scheduled-stop-915816/pid: no such file or directory
I0927 01:10:47.693092   22114 retry.go:31] will retry after 10.95232ms: open /home/jenkins/minikube-integration/19711-14912/.minikube/profiles/scheduled-stop-915816/pid: no such file or directory
I0927 01:10:47.704377   22114 retry.go:31] will retry after 10.809079ms: open /home/jenkins/minikube-integration/19711-14912/.minikube/profiles/scheduled-stop-915816/pid: no such file or directory
I0927 01:10:47.715657   22114 retry.go:31] will retry after 39.676908ms: open /home/jenkins/minikube-integration/19711-14912/.minikube/profiles/scheduled-stop-915816/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-915816 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-915816 -n scheduled-stop-915816
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-915816
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-915816 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-915816
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-915816: exit status 7 (64.162298ms)

-- stdout --
	scheduled-stop-915816
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-915816 -n scheduled-stop-915816
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-915816 -n scheduled-stop-915816: exit status 7 (63.872705ms)

-- stdout --
	Stopped

-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-915816" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-915816
--- PASS: TestScheduledStopUnix (121.47s)

TestSkaffold (136.26s)

=== RUN   TestSkaffold
skaffold_test.go:59: (dbg) Run:  /tmp/skaffold.exe1546356516 version
skaffold_test.go:63: skaffold version: v2.13.2
skaffold_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p skaffold-410166 --memory=2600 --driver=kvm2 
E0927 01:12:40.382629   22114 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-14912/.minikube/profiles/functional-471370/client.crt: no such file or directory" logger="UnhandledError"
skaffold_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p skaffold-410166 --memory=2600 --driver=kvm2 : (51.987453704s)
skaffold_test.go:86: copying out/minikube-linux-amd64 to /home/jenkins/workspace/KVM_Linux_integration/out/minikube
skaffold_test.go:105: (dbg) Run:  /tmp/skaffold.exe1546356516 run --minikube-profile skaffold-410166 --kube-context skaffold-410166 --status-check=true --port-forward=false --interactive=false
E0927 01:13:34.895888   22114 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-14912/.minikube/profiles/addons-921129/client.crt: no such file or directory" logger="UnhandledError"
skaffold_test.go:105: (dbg) Done: /tmp/skaffold.exe1546356516 run --minikube-profile skaffold-410166 --kube-context skaffold-410166 --status-check=true --port-forward=false --interactive=false: (1m11.507440303s)
skaffold_test.go:111: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-app" in namespace "default" ...
helpers_test.go:344: "leeroy-app-b868b876b-n96rz" [0b55e527-af68-4fbe-a1dc-b9dc9b324f82] Running
skaffold_test.go:111: (dbg) TestSkaffold: app=leeroy-app healthy within 6.004632531s
skaffold_test.go:114: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-web" in namespace "default" ...
helpers_test.go:344: "leeroy-web-5d7f4dbb8b-f6kjw" [676fb559-d433-4580-9ee1-f8b06cc1a554] Running
skaffold_test.go:114: (dbg) TestSkaffold: app=leeroy-web healthy within 5.004201955s
helpers_test.go:175: Cleaning up "skaffold-410166" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p skaffold-410166
--- PASS: TestSkaffold (136.26s)

TestRunningBinaryUpgrade (119.56s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.3956252785 start -p running-upgrade-908414 --memory=2200 --vm-driver=kvm2 
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.3956252785 start -p running-upgrade-908414 --memory=2200 --vm-driver=kvm2 : (54.309513408s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-908414 --memory=2200 --alsologtostderr -v=1 --driver=kvm2 
E0927 01:19:13.462363   22114 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-14912/.minikube/profiles/skaffold-410166/client.crt: no such file or directory" logger="UnhandledError"
E0927 01:19:23.704356   22114 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-14912/.minikube/profiles/skaffold-410166/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-908414 --memory=2200 --alsologtostderr -v=1 --driver=kvm2 : (1m3.645958222s)
helpers_test.go:175: Cleaning up "running-upgrade-908414" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-908414
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-908414: (1.18948721s)
--- PASS: TestRunningBinaryUpgrade (119.56s)

TestKubernetesUpgrade (243.1s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-816310 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2 
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-816310 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2 : (56.721722641s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-816310
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-816310: (12.561509036s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-816310 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-816310 status --format={{.Host}}: exit status 7 (65.032058ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-816310 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=kvm2 
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-816310 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=kvm2 : (1m24.630341189s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-816310 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-816310 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2 
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-816310 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2 : exit status 106 (89.02241ms)

-- stdout --
	* [kubernetes-upgrade-816310] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19711
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19711-14912/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19711-14912/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.31.1 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-816310
	    minikube start -p kubernetes-upgrade-816310 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-8163102 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.31.1, by running:
	    
	    minikube start -p kubernetes-upgrade-816310 --kubernetes-version=v1.31.1
	    

** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-816310 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=kvm2 
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-816310 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=kvm2 : (1m27.664575272s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-816310" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-816310
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-816310: (1.307446548s)
--- PASS: TestKubernetesUpgrade (243.10s)

TestStoppedBinaryUpgrade/Setup (0.63s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.63s)

TestStoppedBinaryUpgrade/Upgrade (154.63s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.274339608 start -p stopped-upgrade-468384 --memory=2200 --vm-driver=kvm2 
E0927 01:17:40.383099   22114 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-14912/.minikube/profiles/functional-471370/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.274339608 start -p stopped-upgrade-468384 --memory=2200 --vm-driver=kvm2 : (1m21.763007052s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.274339608 -p stopped-upgrade-468384 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.274339608 -p stopped-upgrade-468384 stop: (13.181496272s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-468384 --memory=2200 --alsologtostderr -v=1 --driver=kvm2 
E0927 01:19:03.208981   22114 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-14912/.minikube/profiles/skaffold-410166/client.crt: no such file or directory" logger="UnhandledError"
E0927 01:19:03.215468   22114 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-14912/.minikube/profiles/skaffold-410166/client.crt: no such file or directory" logger="UnhandledError"
E0927 01:19:03.226962   22114 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-14912/.minikube/profiles/skaffold-410166/client.crt: no such file or directory" logger="UnhandledError"
E0927 01:19:03.248465   22114 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-14912/.minikube/profiles/skaffold-410166/client.crt: no such file or directory" logger="UnhandledError"
E0927 01:19:03.289972   22114 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-14912/.minikube/profiles/skaffold-410166/client.crt: no such file or directory" logger="UnhandledError"
E0927 01:19:03.371459   22114 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-14912/.minikube/profiles/skaffold-410166/client.crt: no such file or directory" logger="UnhandledError"
E0927 01:19:03.533105   22114 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-14912/.minikube/profiles/skaffold-410166/client.crt: no such file or directory" logger="UnhandledError"
E0927 01:19:03.854877   22114 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-14912/.minikube/profiles/skaffold-410166/client.crt: no such file or directory" logger="UnhandledError"
E0927 01:19:04.497000   22114 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-14912/.minikube/profiles/skaffold-410166/client.crt: no such file or directory" logger="UnhandledError"
E0927 01:19:05.778491   22114 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-14912/.minikube/profiles/skaffold-410166/client.crt: no such file or directory" logger="UnhandledError"
E0927 01:19:08.340034   22114 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-14912/.minikube/profiles/skaffold-410166/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-468384 --memory=2200 --alsologtostderr -v=1 --driver=kvm2 : (59.68753007s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (154.63s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.07s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-246286 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2 
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-246286 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2 : exit status 14 (66.226871ms)

-- stdout --
	* [NoKubernetes-246286] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19711
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19711-14912/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19711-14912/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.07s)

TestNoKubernetes/serial/StartWithK8s (74.81s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-246286 --driver=kvm2 
E0927 01:18:34.892888   22114 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-14912/.minikube/profiles/addons-921129/client.crt: no such file or directory" logger="UnhandledError"
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-246286 --driver=kvm2 : (1m14.489806341s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-246286 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (74.81s)

TestNoKubernetes/serial/StartWithStopK8s (18s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-246286 --no-kubernetes --driver=kvm2 
E0927 01:19:44.186691   22114 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-14912/.minikube/profiles/skaffold-410166/client.crt: no such file or directory" logger="UnhandledError"
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-246286 --no-kubernetes --driver=kvm2 : (16.749189204s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-246286 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-246286 status -o json: exit status 2 (237.404709ms)

-- stdout --
	{"Name":"NoKubernetes-246286","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-246286
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-246286: (1.016592962s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (18.00s)

TestNoKubernetes/serial/Start (35.61s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-246286 --no-kubernetes --driver=kvm2 
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-246286 --no-kubernetes --driver=kvm2 : (35.61329557s)
--- PASS: TestNoKubernetes/serial/Start (35.61s)

TestStoppedBinaryUpgrade/MinikubeLogs (1.45s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-468384
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-amd64 logs -p stopped-upgrade-468384: (1.453801773s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.45s)

TestPause/serial/Start (118.31s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-665684 --memory=2048 --install-addons=false --wait=all --driver=kvm2 
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-665684 --memory=2048 --install-addons=false --wait=all --driver=kvm2 : (1m58.308212182s)
--- PASS: TestPause/serial/Start (118.31s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.21s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-246286 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-246286 "sudo systemctl is-active --quiet service kubelet": exit status 1 (206.607235ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.21s)

TestNoKubernetes/serial/ProfileList (0.99s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (0.99s)

TestNoKubernetes/serial/Stop (2.29s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-246286
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-246286: (2.290148651s)
--- PASS: TestNoKubernetes/serial/Stop (2.29s)

TestNoKubernetes/serial/StartNoArgs (65.57s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-246286 --driver=kvm2 
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-246286 --driver=kvm2 : (1m5.566924687s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (65.57s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.21s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-246286 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-246286 "sudo systemctl is-active --quiet service kubelet": exit status 1 (209.296178ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.21s)

TestStartStop/group/old-k8s-version/serial/FirstStart (132.96s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-914384 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --kubernetes-version=v1.20.0
E0927 01:21:47.070302   22114 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-14912/.minikube/profiles/skaffold-410166/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-914384 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --kubernetes-version=v1.20.0: (2m12.956385681s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (132.96s)

TestPause/serial/SecondStartNoReconfiguration (60.37s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-665684 --alsologtostderr -v=1 --driver=kvm2 
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-665684 --alsologtostderr -v=1 --driver=kvm2 : (1m0.324406343s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (60.37s)

TestStartStop/group/no-preload/serial/FirstStart (77.29s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-680738 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --kubernetes-version=v1.31.1
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-680738 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --kubernetes-version=v1.31.1: (1m17.290868295s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (77.29s)

TestPause/serial/Pause (0.72s)

=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-665684 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.72s)

TestPause/serial/VerifyStatus (0.31s)

=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p pause-665684 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-665684 --output=json --layout=cluster: exit status 2 (304.664784ms)

-- stdout --
	{"Name":"pause-665684","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 12 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.34.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-665684","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.31s)

TestPause/serial/Unpause (0.68s)

=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-amd64 unpause -p pause-665684 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.68s)

TestPause/serial/PauseAgain (0.93s)

=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-665684 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.93s)

TestPause/serial/DeletePaused (1.41s)

=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p pause-665684 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p pause-665684 --alsologtostderr -v=5: (1.412925855s)
--- PASS: TestPause/serial/DeletePaused (1.41s)

TestPause/serial/VerifyDeletedResources (0.72s)

=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestPause/serial/VerifyDeletedResources (0.72s)

TestStartStop/group/embed-certs/serial/FirstStart (74.64s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-654811 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --kubernetes-version=v1.31.1
E0927 01:23:17.962715   22114 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-14912/.minikube/profiles/addons-921129/client.crt: no such file or directory" logger="UnhandledError"
E0927 01:23:34.892340   22114 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-14912/.minikube/profiles/addons-921129/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-654811 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --kubernetes-version=v1.31.1: (1m14.642527071s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (74.64s)

TestStartStop/group/no-preload/serial/DeployApp (10.39s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-680738 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [f8cd0b9b-ce9e-4ecd-a36f-f172f5bcedc4] Pending
helpers_test.go:344: "busybox" [f8cd0b9b-ce9e-4ecd-a36f-f172f5bcedc4] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [f8cd0b9b-ce9e-4ecd-a36f-f172f5bcedc4] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 10.003838046s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-680738 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (10.39s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.42s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-680738 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p no-preload-680738 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.333618343s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-680738 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.42s)

TestStartStop/group/no-preload/serial/Stop (13.36s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-680738 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-680738 --alsologtostderr -v=3: (13.354919661s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (13.36s)

TestStartStop/group/old-k8s-version/serial/DeployApp (8.5s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-914384 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [35282c95-1312-4118-8cbc-bf13d5b5ced2] Pending
helpers_test.go:344: "busybox" [35282c95-1312-4118-8cbc-bf13d5b5ced2] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [35282c95-1312-4118-8cbc-bf13d5b5ced2] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 8.005488269s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-914384 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (8.50s)

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.18s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-914384 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-914384 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.067321757s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-914384 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.18s)

TestStartStop/group/old-k8s-version/serial/Stop (13.42s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-914384 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-914384 --alsologtostderr -v=3: (13.421298913s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (13.42s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.29s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-680738 -n no-preload-680738
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-680738 -n no-preload-680738: exit status 7 (125.784579ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-680738 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.29s)
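The `status error: exit status 7 (may be ok)` line above reflects the test's tolerance for a stopped host: `minikube status` exits non-zero when the machine is not running, and the EnableAddonAfterStop flow accepts exit status 7 before enabling the dashboard addon. A minimal sketch of that tolerance, where `check_status` is a hypothetical stub standing in for the real `out/minikube-linux-amd64 status --format={{.Host}} -p <profile> -n <node>` call:

```shell
#!/bin/sh
# check_status is a hypothetical stub for `minikube status --format={{.Host}}`:
# it prints the host state and exits 7 when the host is stopped.
check_status() {
  echo "Stopped"
  return 7
}

host=$(check_status no-preload-680738)
rc=$?
if [ "$rc" -ne 0 ] && [ "$rc" -ne 7 ]; then
  # Any other non-zero exit is a real failure.
  echo "status error: exit status $rc" >&2
  exit 1
fi
# Exit status 7 is tolerated: the host is merely stopped, not broken.
echo "host=$host, status error: exit status $rc (may be ok)"
```

The stub mirrors why the report logs a "Non-zero exit" yet still marks the subtest PASS.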

                                                
                                    
TestStartStop/group/no-preload/serial/SecondStart (298.87s)
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-680738 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --kubernetes-version=v1.31.1
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-680738 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --kubernetes-version=v1.31.1: (4m58.60646599s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-680738 -n no-preload-680738
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (298.87s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.31s)
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-914384 -n old-k8s-version-914384
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-914384 -n old-k8s-version-914384: exit status 7 (85.124178ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-914384 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.31s)

TestStartStop/group/old-k8s-version/serial/SecondStart (537.1s)
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-914384 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --kubernetes-version=v1.20.0
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-914384 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --kubernetes-version=v1.20.0: (8m56.747912143s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-914384 -n old-k8s-version-914384
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (537.10s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (94.95s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-879474 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --kubernetes-version=v1.31.1
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-879474 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --kubernetes-version=v1.31.1: (1m34.951810479s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (94.95s)

TestStartStop/group/embed-certs/serial/DeployApp (9.34s)
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-654811 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [8be9d04f-4175-4b53-b217-e31edc645f3d] Pending
helpers_test.go:344: "busybox" [8be9d04f-4175-4b53-b217-e31edc645f3d] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [8be9d04f-4175-4b53-b217-e31edc645f3d] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 9.004984529s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-654811 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (9.34s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1s)
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-654811 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-654811 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.00s)

TestStartStop/group/embed-certs/serial/Stop (13.37s)
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-654811 --alsologtostderr -v=3
E0927 01:24:30.912619   22114 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-14912/.minikube/profiles/skaffold-410166/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-654811 --alsologtostderr -v=3: (13.366081194s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (13.37s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.2s)
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-654811 -n embed-certs-654811
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-654811 -n embed-certs-654811: exit status 7 (77.224547ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-654811 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.20s)

TestStartStop/group/embed-certs/serial/SecondStart (353.71s)
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-654811 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --kubernetes-version=v1.31.1
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-654811 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --kubernetes-version=v1.31.1: (5m53.436828779s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-654811 -n embed-certs-654811
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (353.71s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.35s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-879474 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [5163630e-9c0c-42a8-aba0-7fed638fad8d] Pending
helpers_test.go:344: "busybox" [5163630e-9c0c-42a8-aba0-7fed638fad8d] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [5163630e-9c0c-42a8-aba0-7fed638fad8d] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 8.006313136s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-879474 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.35s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.08s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-879474 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-879474 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.08s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (13.34s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-879474 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-879474 --alsologtostderr -v=3: (13.339990034s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (13.34s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.2s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-879474 -n default-k8s-diff-port-879474
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-879474 -n default-k8s-diff-port-879474: exit status 7 (71.445526ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-879474 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.20s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (313.98s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-879474 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --kubernetes-version=v1.31.1
E0927 01:26:57.114150   22114 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-14912/.minikube/profiles/gvisor-151964/client.crt: no such file or directory" logger="UnhandledError"
E0927 01:26:57.120713   22114 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-14912/.minikube/profiles/gvisor-151964/client.crt: no such file or directory" logger="UnhandledError"
E0927 01:26:57.132156   22114 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-14912/.minikube/profiles/gvisor-151964/client.crt: no such file or directory" logger="UnhandledError"
E0927 01:26:57.153670   22114 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-14912/.minikube/profiles/gvisor-151964/client.crt: no such file or directory" logger="UnhandledError"
E0927 01:26:57.195134   22114 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-14912/.minikube/profiles/gvisor-151964/client.crt: no such file or directory" logger="UnhandledError"
E0927 01:26:57.276639   22114 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-14912/.minikube/profiles/gvisor-151964/client.crt: no such file or directory" logger="UnhandledError"
E0927 01:26:57.438230   22114 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-14912/.minikube/profiles/gvisor-151964/client.crt: no such file or directory" logger="UnhandledError"
E0927 01:26:57.759986   22114 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-14912/.minikube/profiles/gvisor-151964/client.crt: no such file or directory" logger="UnhandledError"
E0927 01:26:58.402152   22114 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-14912/.minikube/profiles/gvisor-151964/client.crt: no such file or directory" logger="UnhandledError"
E0927 01:26:59.683483   22114 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-14912/.minikube/profiles/gvisor-151964/client.crt: no such file or directory" logger="UnhandledError"
E0927 01:27:02.245060   22114 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-14912/.minikube/profiles/gvisor-151964/client.crt: no such file or directory" logger="UnhandledError"
E0927 01:27:07.366452   22114 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-14912/.minikube/profiles/gvisor-151964/client.crt: no such file or directory" logger="UnhandledError"
E0927 01:27:17.608384   22114 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-14912/.minikube/profiles/gvisor-151964/client.crt: no such file or directory" logger="UnhandledError"
E0927 01:27:23.456100   22114 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-14912/.minikube/profiles/functional-471370/client.crt: no such file or directory" logger="UnhandledError"
E0927 01:27:38.090364   22114 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-14912/.minikube/profiles/gvisor-151964/client.crt: no such file or directory" logger="UnhandledError"
E0927 01:27:40.382450   22114 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-14912/.minikube/profiles/functional-471370/client.crt: no such file or directory" logger="UnhandledError"
E0927 01:28:19.052263   22114 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-14912/.minikube/profiles/gvisor-151964/client.crt: no such file or directory" logger="UnhandledError"
E0927 01:28:34.892813   22114 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-14912/.minikube/profiles/addons-921129/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-879474 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --kubernetes-version=v1.31.1: (5m13.605909057s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-879474 -n default-k8s-diff-port-879474
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (313.98s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-bj5v2" [e6a5479f-af7a-4539-bea0-b2d8d8e7f0b8] Running
E0927 01:29:03.208903   22114 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-14912/.minikube/profiles/skaffold-410166/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.005161658s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.08s)
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-bj5v2" [e6a5479f-af7a-4539-bea0-b2d8d8e7f0b8] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004640861s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-680738 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.08s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.2s)
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-680738 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/gvisor-addon:2
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.20s)
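The two `Found non-minikube image` lines above are informational, not failures: the verification step lists every image in the node's runtime and reports those outside the expected Kubernetes set (the gvisor-addon and busybox images are leftovers from earlier subtests in this profile). A rough sketch of that filtering, with sample data standing in for the real `out/minikube-linux-amd64 -p no-preload-680738 image list --format=json` output and a simplified registry-prefix allowlist in place of the test's actual expected-image logic:

```shell
#!/bin/sh
# images stands in for the JSON array returned by `minikube image list --format=json`;
# the registry.k8s.io prefix check is an illustrative simplification of the
# allowlist the test applies.
images='["registry.k8s.io/kube-apiserver:v1.31.1","registry.k8s.io/pause:3.10","gcr.io/k8s-minikube/gvisor-addon:2","gcr.io/k8s-minikube/busybox:1.28.4-glibc"]'

echo "$images" | tr -d '[]"' | tr ',' '\n' | while read -r img; do
  case "$img" in
    registry.k8s.io/*) ;;                        # expected core images: ignore
    *) echo "Found non-minikube image: $img" ;;  # anything else is reported
  esac
done
```

Run against the sample data, this reports the same two gcr.io/k8s-minikube images the log shows.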

                                                
                                    
TestStartStop/group/no-preload/serial/Pause (2.55s)
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-680738 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-680738 -n no-preload-680738
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-680738 -n no-preload-680738: exit status 2 (257.672808ms)
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-680738 -n no-preload-680738
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-680738 -n no-preload-680738: exit status 2 (263.604901ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p no-preload-680738 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-680738 -n no-preload-680738
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-680738 -n no-preload-680738
--- PASS: TestStartStop/group/no-preload/serial/Pause (2.55s)
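The Pause subtest above verifies component state through exit codes: after `pause`, `minikube status` exits 2 because the apiserver reports `Paused` and the kubelet `Stopped`, and the test accepts that before calling `unpause` and re-checking. A stub-based sketch of that expectation, where `status_field` is hypothetical and stands in for `minikube status --format={{.APIServer}}` / `--format={{.Kubelet}}` on a paused cluster:

```shell
#!/bin/sh
# status_field is a hypothetical stub for `minikube status --format={{.<Field>}}`
# on a paused cluster: it prints the component state and exits 2 to signal that
# not every component is Running.
status_field() {
  case "$1" in
    APIServer) echo "Paused" ;;
    Kubelet)   echo "Stopped" ;;
    *)         echo "Unknown" ;;
  esac
  return 2
}

for f in APIServer Kubelet; do
  state=$(status_field "$f")
  rc=$?
  # Exit status 2 is tolerated, mirroring "status error: exit status 2 (may be ok)".
  echo "$f=$state (exit status $rc, may be ok)"
done
```

After `unpause`, the same status calls would exit 0 with both components back to Running.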

                                                
                                    
TestStartStop/group/newest-cni/serial/FirstStart (60.94s)
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-284712 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --kubernetes-version=v1.31.1
E0927 01:29:40.974483   22114 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-14912/.minikube/profiles/gvisor-151964/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-284712 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --kubernetes-version=v1.31.1: (1m0.936488463s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (60.94s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.91s)
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-284712 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.91s)

TestStartStop/group/newest-cni/serial/Stop (8.34s)
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-284712 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-284712 --alsologtostderr -v=3: (8.336919043s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (8.34s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.2s)
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-284712 -n newest-cni-284712
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-284712 -n newest-cni-284712: exit status 7 (72.433416ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-284712 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.20s)

TestStartStop/group/newest-cni/serial/SecondStart (41.95s)
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-284712 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --kubernetes-version=v1.31.1
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-284712 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --kubernetes-version=v1.31.1: (41.7001835s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-284712 -n newest-cni-284712
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (41.95s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-hbl87" [6ebefcdb-24df-4400-bf57-a3d21dc52a4f] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004585239s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.08s)
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-hbl87" [6ebefcdb-24df-4400-bf57-a3d21dc52a4f] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004692481s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-654811 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.08s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.22s)
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-654811 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/gvisor-addon:2
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.22s)

TestStartStop/group/embed-certs/serial/Pause (2.53s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-654811 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-654811 -n embed-certs-654811
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-654811 -n embed-certs-654811: exit status 2 (247.697047ms)

-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-654811 -n embed-certs-654811
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-654811 -n embed-certs-654811: exit status 2 (255.211302ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p embed-certs-654811 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-654811 -n embed-certs-654811
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-654811 -n embed-certs-654811
--- PASS: TestStartStop/group/embed-certs/serial/Pause (2.53s)

TestNetworkPlugins/group/auto/Start (100.26s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-244328 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2 
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-244328 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2 : (1m40.259193685s)
--- PASS: TestNetworkPlugins/group/auto/Start (100.26s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.21s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-284712 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/gvisor-addon:2
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.21s)

TestStartStop/group/newest-cni/serial/Pause (2.39s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-284712 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-284712 -n newest-cni-284712
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-284712 -n newest-cni-284712: exit status 2 (249.706618ms)

-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-284712 -n newest-cni-284712
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-284712 -n newest-cni-284712: exit status 2 (261.979982ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-284712 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-284712 -n newest-cni-284712
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-284712 -n newest-cni-284712
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.39s)

TestNetworkPlugins/group/kindnet/Start (92.94s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-244328 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2 
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-244328 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2 : (1m32.942430491s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (92.94s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (10.01s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-vmrnq" [a6767f75-b682-4ea3-b7a6-eba9ac977ab4] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:344: "kubernetes-dashboard-695b96c756-vmrnq" [a6767f75-b682-4ea3-b7a6-eba9ac977ab4] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 10.004551989s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (10.01s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.09s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-vmrnq" [a6767f75-b682-4ea3-b7a6-eba9ac977ab4] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.005262779s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-879474 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.09s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.21s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-879474 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/gvisor-addon:2
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.21s)

TestStartStop/group/default-k8s-diff-port/serial/Pause (2.64s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-879474 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-879474 -n default-k8s-diff-port-879474
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-879474 -n default-k8s-diff-port-879474: exit status 2 (243.10329ms)

-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-879474 -n default-k8s-diff-port-879474
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-879474 -n default-k8s-diff-port-879474: exit status 2 (259.237941ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-diff-port-879474 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-879474 -n default-k8s-diff-port-879474
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-879474 -n default-k8s-diff-port-879474
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (2.64s)

TestNetworkPlugins/group/calico/Start (93.86s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-244328 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2 
E0927 01:31:57.114488   22114 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-14912/.minikube/profiles/gvisor-151964/client.crt: no such file or directory" logger="UnhandledError"
E0927 01:32:24.816764   22114 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-14912/.minikube/profiles/gvisor-151964/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-244328 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2 : (1m33.861271048s)
--- PASS: TestNetworkPlugins/group/calico/Start (93.86s)

TestNetworkPlugins/group/auto/KubeletFlags (0.29s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-244328 "pgrep -a kubelet"
I0927 01:32:28.590300   22114 config.go:182] Loaded profile config "auto-244328": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.29s)

TestNetworkPlugins/group/auto/NetCatPod (11.24s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-244328 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-hhq4n" [3cf9933e-a713-4d82-9846-48f024f6d147] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-hhq4n" [3cf9933e-a713-4d82-9846-48f024f6d147] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 11.004193142s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (11.24s)

TestNetworkPlugins/group/auto/DNS (0.18s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-244328 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.18s)

TestNetworkPlugins/group/auto/Localhost (0.16s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-244328 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.16s)

TestNetworkPlugins/group/auto/HairPin (0.16s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-244328 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.16s)

TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-rqzc5" [86b7475a-a110-4ca3-af0e-b9069863b8e5] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.004827208s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

TestNetworkPlugins/group/kindnet/KubeletFlags (0.24s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-244328 "pgrep -a kubelet"
I0927 01:32:49.916695   22114 config.go:182] Loaded profile config "kindnet-244328": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.24s)

TestNetworkPlugins/group/kindnet/NetCatPod (12.39s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-244328 replace --force -f testdata/netcat-deployment.yaml
I0927 01:32:50.282177   22114 kapi.go:136] Waiting for deployment netcat to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-xb8w8" [f0f5f2e1-7f06-4cf3-bf04-91185fac6030] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-xb8w8" [f0f5f2e1-7f06-4cf3-bf04-91185fac6030] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 12.004415602s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (12.39s)

TestNetworkPlugins/group/custom-flannel/Start (71.34s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-244328 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2 
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-244328 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2 : (1m11.339990337s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (71.34s)

TestNetworkPlugins/group/kindnet/DNS (0.2s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-244328 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.20s)

TestNetworkPlugins/group/kindnet/Localhost (0.17s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-244328 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.17s)

TestNetworkPlugins/group/kindnet/HairPin (0.15s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-244328 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.15s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-t582l" [02debf2d-4f66-4074-b8d5-98ceff5d3a33] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.005005888s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.09s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-t582l" [02debf2d-4f66-4074-b8d5-98ceff5d3a33] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004079998s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-914384 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.09s)

TestNetworkPlugins/group/false/Start (107.01s)

=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p false-244328 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=kvm2 
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p false-244328 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=kvm2 : (1m47.006671873s)
--- PASS: TestNetworkPlugins/group/false/Start (107.01s)

TestNetworkPlugins/group/calico/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-mvscb" [d5a8c3c4-40f7-4f8a-ae22-a9292efea4fc] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.005587302s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.22s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-914384 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/gvisor-addon:2
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.22s)

TestStartStop/group/old-k8s-version/serial/Pause (2.54s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-914384 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-914384 -n old-k8s-version-914384
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-914384 -n old-k8s-version-914384: exit status 2 (252.586151ms)

-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-914384 -n old-k8s-version-914384
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-914384 -n old-k8s-version-914384: exit status 2 (245.647775ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p old-k8s-version-914384 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-914384 -n old-k8s-version-914384
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-914384 -n old-k8s-version-914384
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (2.54s)
E0927 01:35:59.916122   22114 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-14912/.minikube/profiles/default-k8s-diff-port-879474/client.crt: no such file or directory" logger="UnhandledError"

TestNetworkPlugins/group/calico/KubeletFlags (0.25s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-244328 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.25s)

TestNetworkPlugins/group/enable-default-cni/Start (120.52s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-244328 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2 
I0927 01:33:25.840716   22114 config.go:182] Loaded profile config "calico-244328": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-244328 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2 : (2m0.517308795s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (120.52s)

TestNetworkPlugins/group/calico/NetCatPod (12.26s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-244328 replace --force -f testdata/netcat-deployment.yaml
I0927 01:33:26.082222   22114 kapi.go:136] Waiting for deployment netcat to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-z22wx" [9a45bc89-3739-4fed-a52b-2fcd297b1a8b] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-z22wx" [9a45bc89-3739-4fed-a52b-2fcd297b1a8b] Running
E0927 01:33:34.892601   22114 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-14912/.minikube/profiles/addons-921129/client.crt: no such file or directory" logger="UnhandledError"
E0927 01:33:35.502972   22114 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-14912/.minikube/profiles/no-preload-680738/client.crt: no such file or directory" logger="UnhandledError"
E0927 01:33:35.509477   22114 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-14912/.minikube/profiles/no-preload-680738/client.crt: no such file or directory" logger="UnhandledError"
E0927 01:33:35.520926   22114 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-14912/.minikube/profiles/no-preload-680738/client.crt: no such file or directory" logger="UnhandledError"
E0927 01:33:35.542452   22114 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-14912/.minikube/profiles/no-preload-680738/client.crt: no such file or directory" logger="UnhandledError"
E0927 01:33:35.583919   22114 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-14912/.minikube/profiles/no-preload-680738/client.crt: no such file or directory" logger="UnhandledError"
E0927 01:33:35.665480   22114 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-14912/.minikube/profiles/no-preload-680738/client.crt: no such file or directory" logger="UnhandledError"
E0927 01:33:35.827600   22114 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-14912/.minikube/profiles/no-preload-680738/client.crt: no such file or directory" logger="UnhandledError"
E0927 01:33:36.149427   22114 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-14912/.minikube/profiles/no-preload-680738/client.crt: no such file or directory" logger="UnhandledError"
E0927 01:33:36.791744   22114 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-14912/.minikube/profiles/no-preload-680738/client.crt: no such file or directory" logger="UnhandledError"
E0927 01:33:38.074098   22114 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-14912/.minikube/profiles/no-preload-680738/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 12.004473758s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (12.26s)

TestNetworkPlugins/group/calico/DNS (0.18s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-244328 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.18s)

TestNetworkPlugins/group/calico/Localhost (0.14s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-244328 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.14s)

TestNetworkPlugins/group/calico/HairPin (0.15s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-244328 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.15s)

TestNetworkPlugins/group/flannel/Start (103.25s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-244328 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2 
E0927 01:33:55.998934   22114 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-14912/.minikube/profiles/no-preload-680738/client.crt: no such file or directory" logger="UnhandledError"
E0927 01:34:00.329419   22114 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-14912/.minikube/profiles/old-k8s-version-914384/client.crt: no such file or directory" logger="UnhandledError"
E0927 01:34:03.209114   22114 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-14912/.minikube/profiles/skaffold-410166/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-244328 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2 : (1m43.250759526s)
--- PASS: TestNetworkPlugins/group/flannel/Start (103.25s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.21s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-244328 "pgrep -a kubelet"
I0927 01:34:09.261111   22114 config.go:182] Loaded profile config "custom-flannel-244328": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.21s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (11.23s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-244328 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-jvrl4" [c55c457d-8545-45ed-9f8a-2b54d867bdff] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0927 01:34:10.571363   22114 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-14912/.minikube/profiles/old-k8s-version-914384/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "netcat-6fc964789b-jvrl4" [c55c457d-8545-45ed-9f8a-2b54d867bdff] Running
E0927 01:34:16.481399   22114 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-14912/.minikube/profiles/no-preload-680738/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 11.004646349s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (11.23s)

TestNetworkPlugins/group/custom-flannel/DNS (0.18s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-244328 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.18s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.15s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-244328 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.15s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.14s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-244328 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.14s)

TestNetworkPlugins/group/bridge/Start (76.63s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-244328 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2 
E0927 01:34:57.443887   22114 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-14912/.minikube/profiles/no-preload-680738/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-244328 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2 : (1m16.632342389s)
--- PASS: TestNetworkPlugins/group/bridge/Start (76.63s)

TestNetworkPlugins/group/false/KubeletFlags (0.22s)

=== RUN   TestNetworkPlugins/group/false/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p false-244328 "pgrep -a kubelet"
I0927 01:35:06.679501   22114 config.go:182] Loaded profile config "false-244328": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/false/KubeletFlags (0.22s)

TestNetworkPlugins/group/false/NetCatPod (11.28s)

=== RUN   TestNetworkPlugins/group/false/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context false-244328 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/false/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-79858" [cb0d2d03-258e-4a02-a4ee-dbe79c81015d] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-79858" [cb0d2d03-258e-4a02-a4ee-dbe79c81015d] Running
E0927 01:35:12.015669   22114 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-14912/.minikube/profiles/old-k8s-version-914384/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/false/NetCatPod: app=netcat healthy within 11.004864265s
--- PASS: TestNetworkPlugins/group/false/NetCatPod (11.28s)

TestNetworkPlugins/group/false/DNS (0.18s)

=== RUN   TestNetworkPlugins/group/false/DNS
net_test.go:175: (dbg) Run:  kubectl --context false-244328 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/false/DNS (0.18s)

TestNetworkPlugins/group/false/Localhost (0.16s)

=== RUN   TestNetworkPlugins/group/false/Localhost
net_test.go:194: (dbg) Run:  kubectl --context false-244328 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/false/Localhost (0.16s)

TestNetworkPlugins/group/false/HairPin (0.16s)

=== RUN   TestNetworkPlugins/group/false/HairPin
net_test.go:264: (dbg) Run:  kubectl --context false-244328 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/false/HairPin (0.16s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.34s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-244328 "pgrep -a kubelet"
E0927 01:35:26.273966   22114 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-14912/.minikube/profiles/skaffold-410166/client.crt: no such file or directory" logger="UnhandledError"
I0927 01:35:26.604238   22114 config.go:182] Loaded profile config "enable-default-cni-244328": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.34s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (15.13s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-244328 replace --force -f testdata/netcat-deployment.yaml
net_test.go:149: (dbg) Done: kubectl --context enable-default-cni-244328 replace --force -f testdata/netcat-deployment.yaml: (1.913395933s)
I0927 01:35:28.527773   22114 kapi.go:136] Waiting for deployment netcat to stabilize, generation 1 observed generation 0 spec.replicas 1 status.replicas 0
I0927 01:35:28.697675   22114 kapi.go:136] Waiting for deployment netcat to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-sbvc5" [71c69ef2-16c8-4045-b75e-c5193d45f770] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-sbvc5" [71c69ef2-16c8-4045-b75e-c5193d45f770] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 13.00465825s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (15.13s)

TestNetworkPlugins/group/kubenet/Start (92.85s)

=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kubenet-244328 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=kvm2 
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kubenet-244328 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=kvm2 : (1m32.849272656s)
--- PASS: TestNetworkPlugins/group/kubenet/Start (92.85s)

TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-6wncc" [02f5edb7-0951-486d-b9f1-249fafb9d60d] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.006674129s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

TestNetworkPlugins/group/enable-default-cni/DNS (0.21s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-244328 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.21s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.20s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-244328 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.20s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.17s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-244328 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.17s)

TestNetworkPlugins/group/flannel/KubeletFlags (0.21s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-244328 "pgrep -a kubelet"
I0927 01:35:44.878558   22114 config.go:182] Loaded profile config "flannel-244328": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.21s)

TestNetworkPlugins/group/flannel/NetCatPod (12.26s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-244328 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-4nf54" [15973bd6-4825-48e0-8765-f41c610a0921] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0927 01:35:49.661033   22114 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-14912/.minikube/profiles/default-k8s-diff-port-879474/client.crt: no such file or directory" logger="UnhandledError"
E0927 01:35:49.667691   22114 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-14912/.minikube/profiles/default-k8s-diff-port-879474/client.crt: no such file or directory" logger="UnhandledError"
E0927 01:35:49.679366   22114 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-14912/.minikube/profiles/default-k8s-diff-port-879474/client.crt: no such file or directory" logger="UnhandledError"
E0927 01:35:49.700836   22114 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-14912/.minikube/profiles/default-k8s-diff-port-879474/client.crt: no such file or directory" logger="UnhandledError"
E0927 01:35:49.742529   22114 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-14912/.minikube/profiles/default-k8s-diff-port-879474/client.crt: no such file or directory" logger="UnhandledError"
E0927 01:35:49.824720   22114 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-14912/.minikube/profiles/default-k8s-diff-port-879474/client.crt: no such file or directory" logger="UnhandledError"
E0927 01:35:49.986476   22114 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-14912/.minikube/profiles/default-k8s-diff-port-879474/client.crt: no such file or directory" logger="UnhandledError"
E0927 01:35:50.308327   22114 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-14912/.minikube/profiles/default-k8s-diff-port-879474/client.crt: no such file or directory" logger="UnhandledError"
E0927 01:35:50.949979   22114 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-14912/.minikube/profiles/default-k8s-diff-port-879474/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "netcat-6fc964789b-4nf54" [15973bd6-4825-48e0-8765-f41c610a0921] Running
E0927 01:35:52.231315   22114 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-14912/.minikube/profiles/default-k8s-diff-port-879474/client.crt: no such file or directory" logger="UnhandledError"
E0927 01:35:54.793087   22114 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-14912/.minikube/profiles/default-k8s-diff-port-879474/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 12.004444858s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (12.26s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.22s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-244328 "pgrep -a kubelet"
I0927 01:35:56.654073   22114 config.go:182] Loaded profile config "bridge-244328": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.22s)

TestNetworkPlugins/group/bridge/NetCatPod (11.28s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-244328 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-nsrfb" [081ab738-f5a1-496f-8554-5e205bf97910] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-nsrfb" [081ab738-f5a1-496f-8554-5e205bf97910] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 11.004750999s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (11.28s)

TestNetworkPlugins/group/flannel/DNS (0.19s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-244328 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.19s)

TestNetworkPlugins/group/flannel/Localhost (0.16s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-244328 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.16s)

TestNetworkPlugins/group/flannel/HairPin (0.17s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-244328 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.17s)

TestNetworkPlugins/group/bridge/DNS (0.21s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-244328 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.21s)

TestNetworkPlugins/group/bridge/Localhost (0.15s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-244328 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.15s)

TestNetworkPlugins/group/bridge/HairPin (0.17s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-244328 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.17s)

TestNetworkPlugins/group/kubenet/KubeletFlags (0.21s)

=== RUN   TestNetworkPlugins/group/kubenet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kubenet-244328 "pgrep -a kubelet"
I0927 01:37:09.772136   22114 config.go:182] Loaded profile config "kubenet-244328": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/kubenet/KubeletFlags (0.21s)

TestNetworkPlugins/group/kubenet/NetCatPod (11.26s)

=== RUN   TestNetworkPlugins/group/kubenet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kubenet-244328 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-tsfsv" [fe63f261-7d7b-441d-982b-e6e686aeae6a] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0927 01:37:11.601204   22114 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-14912/.minikube/profiles/default-k8s-diff-port-879474/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "netcat-6fc964789b-tsfsv" [fe63f261-7d7b-441d-982b-e6e686aeae6a] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: app=netcat healthy within 11.011751169s
--- PASS: TestNetworkPlugins/group/kubenet/NetCatPod (11.26s)

TestNetworkPlugins/group/kubenet/DNS (0.17s)

=== RUN   TestNetworkPlugins/group/kubenet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kubenet-244328 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kubenet/DNS (0.17s)

TestNetworkPlugins/group/kubenet/Localhost (0.13s)

=== RUN   TestNetworkPlugins/group/kubenet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kubenet-244328 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kubenet/Localhost (0.13s)

TestNetworkPlugins/group/kubenet/HairPin (0.14s)

=== RUN   TestNetworkPlugins/group/kubenet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kubenet-244328 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kubenet/HairPin (0.14s)

Test skip (31/340)

TestDownloadOnly/v1.20.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

TestDownloadOnly/v1.20.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

TestDownloadOnly/v1.20.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

TestDownloadOnly/v1.31.1/cached-images (0s)

=== RUN   TestDownloadOnly/v1.31.1/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.31.1/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.1/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.31.1/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.1/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.31.1/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnlyKic (0s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

                                                
                                    
x
+
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:438: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
x
+
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
x
+
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:550: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

TestImageBuild/serial/validateImageBuildWithBuildEnv (0s)

=== RUN   TestImageBuild/serial/validateImageBuildWithBuildEnv
image_test.go:114: skipping due to https://github.com/kubernetes/minikube/issues/12431
--- SKIP: TestImageBuild/serial/validateImageBuildWithBuildEnv (0.00s)

TestKicCustomNetwork (0s)

=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

TestKicExistingNetwork (0s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

TestKicCustomSubnet (0s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

TestKicStaticIP (0s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestInsufficientStorage (0s)

=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

TestMissingContainerUpgrade (0s)

=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

TestStartStop/group/disable-driver-mounts (0.17s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-003954" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-003954
--- SKIP: TestStartStop/group/disable-driver-mounts (0.17s)

TestNetworkPlugins/group/cilium (3.24s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:629: 
----------------------- debugLogs start: cilium-244328 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-244328

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-244328

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-244328

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-244328

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-244328

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-244328

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-244328

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-244328

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-244328

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-244328

>>> host: /etc/nsswitch.conf:
* Profile "cilium-244328" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-244328"

>>> host: /etc/hosts:
* Profile "cilium-244328" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-244328"

>>> host: /etc/resolv.conf:
* Profile "cilium-244328" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-244328"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-244328

>>> host: crictl pods:
* Profile "cilium-244328" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-244328"

>>> host: crictl containers:
* Profile "cilium-244328" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-244328"

>>> k8s: describe netcat deployment:
error: context "cilium-244328" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-244328" does not exist

>>> k8s: netcat logs:
error: context "cilium-244328" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-244328" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-244328" does not exist

>>> k8s: coredns logs:
error: context "cilium-244328" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-244328" does not exist

>>> k8s: api server logs:
error: context "cilium-244328" does not exist

>>> host: /etc/cni:
* Profile "cilium-244328" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-244328"

>>> host: ip a s:
* Profile "cilium-244328" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-244328"

>>> host: ip r s:
* Profile "cilium-244328" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-244328"

>>> host: iptables-save:
* Profile "cilium-244328" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-244328"

>>> host: iptables table nat:
* Profile "cilium-244328" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-244328"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-244328

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-244328

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-244328" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-244328" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-244328

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-244328

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-244328" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-244328" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-244328" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-244328" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-244328" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-244328" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-244328"

>>> host: kubelet daemon config:
* Profile "cilium-244328" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-244328"

>>> k8s: kubelet logs:
* Profile "cilium-244328" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-244328"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-244328" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-244328"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-244328" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-244328"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/19711-14912/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Fri, 27 Sep 2024 01:17:42 UTC
        provider: minikube.sigs.k8s.io
        version: v1.34.0
      name: cluster_info
    server: https://192.168.61.126:8443
  name: cert-expiration-276339
contexts:
- context:
    cluster: cert-expiration-276339
    extensions:
    - extension:
        last-update: Fri, 27 Sep 2024 01:17:42 UTC
        provider: minikube.sigs.k8s.io
        version: v1.34.0
      name: context_info
    namespace: default
    user: cert-expiration-276339
  name: cert-expiration-276339
current-context: ""
kind: Config
preferences: {}
users:
- name: cert-expiration-276339
  user:
    client-certificate: /home/jenkins/minikube-integration/19711-14912/.minikube/profiles/cert-expiration-276339/client.crt
    client-key: /home/jenkins/minikube-integration/19711-14912/.minikube/profiles/cert-expiration-276339/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-244328

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-244328" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-244328"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-244328" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-244328"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-244328" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-244328"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-244328" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-244328"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-244328" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-244328"

>>> host: cri-docker daemon config:
* Profile "cilium-244328" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-244328"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-244328" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-244328"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-244328" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-244328"

>>> host: cri-dockerd version:
* Profile "cilium-244328" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-244328"

>>> host: containerd daemon status:
* Profile "cilium-244328" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-244328"

>>> host: containerd daemon config:
* Profile "cilium-244328" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-244328"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-244328" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-244328"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-244328" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-244328"

>>> host: containerd config dump:
* Profile "cilium-244328" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-244328"

>>> host: crio daemon status:
* Profile "cilium-244328" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-244328"

>>> host: crio daemon config:
* Profile "cilium-244328" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-244328"

>>> host: /etc/crio:
* Profile "cilium-244328" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-244328"

>>> host: crio config:
* Profile "cilium-244328" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-244328"

----------------------- debugLogs end: cilium-244328 [took: 3.091184679s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-244328" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-244328
--- SKIP: TestNetworkPlugins/group/cilium (3.24s)