Test Report: KVM_Linux 19649

32fce3c1cb58db02ee1cd4b36165a584c8a30f83:2024-09-16:36244

Failed tests (1/341)

|-------|------------------------------|--------------|
| Order | Failed test                  | Duration (s) |
|-------|------------------------------|--------------|
| 33    | TestAddons/parallel/Registry | 73.47        |
|-------|------------------------------|--------------|
TestAddons/parallel/Registry (73.47s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:332: registry stabilized in 3.367206ms
addons_test.go:334: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-66c9cd494c-smm7x" [998a3900-52e0-4945-9a7d-442a928ba481] Running
addons_test.go:334: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.004044034s
addons_test.go:337: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-jghxw" [0347148d-375f-49b2-a422-6401b38ca5fe] Running
addons_test.go:337: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.004229714s
addons_test.go:342: (dbg) Run:  kubectl --context addons-214113 delete po -l run=registry-test --now
addons_test.go:347: (dbg) Run:  kubectl --context addons-214113 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:347: (dbg) Non-zero exit: kubectl --context addons-214113 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": exit status 1 (1m0.080229172s)

-- stdout --
	pod "registry-test" deleted

-- /stdout --
** stderr ** 
	error: timed out waiting for the condition

** /stderr **
addons_test.go:349: failed to hit registry.kube-system.svc.cluster.local. args "kubectl --context addons-214113 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c \"wget --spider -S http://registry.kube-system.svc.cluster.local\"" failed: exit status 1
addons_test.go:353: expected curl response be "HTTP/1.1 200", but got *pod "registry-test" deleted
*
addons_test.go:361: (dbg) Run:  out/minikube-linux-amd64 -p addons-214113 ip
2024/09/16 17:23:31 [DEBUG] GET http://192.168.39.110:5000
addons_test.go:390: (dbg) Run:  out/minikube-linux-amd64 -p addons-214113 addons disable registry --alsologtostderr -v=1
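
To retry this failure by hand, the probe and the checks it depends on can be re-run against the same profile. This is a minimal sketch, assuming the addons-214113 cluster is still up and the registry addon has not yet been disabled; the final curl against the node IP is an illustrative substitute for the test's own HTTP GET:

	# Re-run the in-cluster probe that timed out (addons_test.go:347)
	kubectl --context addons-214113 run --rm registry-test --restart=Never \
	  --image=gcr.io/k8s-minikube/busybox -it -- \
	  sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"

	# Check the pods the test waited on (labels from addons_test.go:334/337)
	kubectl --context addons-214113 -n kube-system get pods -l actual-registry=true
	kubectl --context addons-214113 -n kube-system get pods -l registry-proxy=true

	# The node-IP route that did respond (see the DEBUG GET above)
	curl -v http://$(out/minikube-linux-amd64 -p addons-214113 ip):5000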
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-214113 -n addons-214113
helpers_test.go:244: <<< TestAddons/parallel/Registry FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Registry]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-214113 logs -n 25
helpers_test.go:252: TestAddons/parallel/Registry logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| delete  | -p download-only-590311                                                                     | download-only-590311 | jenkins | v1.34.0 | 16 Sep 24 17:09 UTC | 16 Sep 24 17:09 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-881316 | jenkins | v1.34.0 | 16 Sep 24 17:09 UTC |                     |
	|         | binary-mirror-881316                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                      |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                      |         |         |                     |                     |
	|         | http://127.0.0.1:40603                                                                      |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-881316                                                                     | binary-mirror-881316 | jenkins | v1.34.0 | 16 Sep 24 17:09 UTC | 16 Sep 24 17:09 UTC |
	| addons  | enable dashboard -p                                                                         | addons-214113        | jenkins | v1.34.0 | 16 Sep 24 17:09 UTC |                     |
	|         | addons-214113                                                                               |                      |         |         |                     |                     |
	| addons  | disable dashboard -p                                                                        | addons-214113        | jenkins | v1.34.0 | 16 Sep 24 17:09 UTC |                     |
	|         | addons-214113                                                                               |                      |         |         |                     |                     |
	| start   | -p addons-214113 --wait=true                                                                | addons-214113        | jenkins | v1.34.0 | 16 Sep 24 17:09 UTC | 16 Sep 24 17:13 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                      |         |         |                     |                     |
	|         | --addons=registry                                                                           |                      |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                      |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                      |         |         |                     |                     |
	|         | --driver=kvm2  --addons=ingress                                                             |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                      |         |         |                     |                     |
	|         | --addons=helm-tiller                                                                        |                      |         |         |                     |                     |
	| addons  | addons-214113 addons disable                                                                | addons-214113        | jenkins | v1.34.0 | 16 Sep 24 17:14 UTC | 16 Sep 24 17:14 UTC |
	|         | volcano --alsologtostderr -v=1                                                              |                      |         |         |                     |                     |
	| addons  | addons-214113 addons                                                                        | addons-214113        | jenkins | v1.34.0 | 16 Sep 24 17:22 UTC | 16 Sep 24 17:22 UTC |
	|         | disable metrics-server                                                                      |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ssh     | addons-214113 ssh cat                                                                       | addons-214113        | jenkins | v1.34.0 | 16 Sep 24 17:22 UTC | 16 Sep 24 17:22 UTC |
	|         | /opt/local-path-provisioner/pvc-17170fae-194c-46e0-85da-9bafa109dae7_default_test-pvc/file1 |                      |         |         |                     |                     |
	| addons  | addons-214113 addons disable                                                                | addons-214113        | jenkins | v1.34.0 | 16 Sep 24 17:22 UTC | 16 Sep 24 17:23 UTC |
	|         | storage-provisioner-rancher                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-214113 addons disable                                                                | addons-214113        | jenkins | v1.34.0 | 16 Sep 24 17:22 UTC | 16 Sep 24 17:22 UTC |
	|         | helm-tiller --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-214113 addons disable                                                                | addons-214113        | jenkins | v1.34.0 | 16 Sep 24 17:22 UTC | 16 Sep 24 17:22 UTC |
	|         | yakd --alsologtostderr -v=1                                                                 |                      |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-214113        | jenkins | v1.34.0 | 16 Sep 24 17:22 UTC | 16 Sep 24 17:22 UTC |
	|         | -p addons-214113                                                                            |                      |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-214113        | jenkins | v1.34.0 | 16 Sep 24 17:22 UTC | 16 Sep 24 17:23 UTC |
	|         | addons-214113                                                                               |                      |         |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-214113        | jenkins | v1.34.0 | 16 Sep 24 17:23 UTC | 16 Sep 24 17:23 UTC |
	|         | addons-214113                                                                               |                      |         |         |                     |                     |
	| addons  | addons-214113 addons                                                                        | addons-214113        | jenkins | v1.34.0 | 16 Sep 24 17:23 UTC | 16 Sep 24 17:23 UTC |
	|         | disable csi-hostpath-driver                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-214113        | jenkins | v1.34.0 | 16 Sep 24 17:23 UTC | 16 Sep 24 17:23 UTC |
	|         | -p addons-214113                                                                            |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-214113 addons                                                                        | addons-214113        | jenkins | v1.34.0 | 16 Sep 24 17:23 UTC | 16 Sep 24 17:23 UTC |
	|         | disable volumesnapshots                                                                     |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ssh     | addons-214113 ssh curl -s                                                                   | addons-214113        | jenkins | v1.34.0 | 16 Sep 24 17:23 UTC | 16 Sep 24 17:23 UTC |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                      |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                      |         |         |                     |                     |
	| ip      | addons-214113 ip                                                                            | addons-214113        | jenkins | v1.34.0 | 16 Sep 24 17:23 UTC | 16 Sep 24 17:23 UTC |
	| addons  | addons-214113 addons disable                                                                | addons-214113        | jenkins | v1.34.0 | 16 Sep 24 17:23 UTC | 16 Sep 24 17:23 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-214113 addons disable                                                                | addons-214113        | jenkins | v1.34.0 | 16 Sep 24 17:23 UTC | 16 Sep 24 17:23 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                      |         |         |                     |                     |
	| addons  | addons-214113 addons disable                                                                | addons-214113        | jenkins | v1.34.0 | 16 Sep 24 17:23 UTC | 16 Sep 24 17:23 UTC |
	|         | headlamp --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| ip      | addons-214113 ip                                                                            | addons-214113        | jenkins | v1.34.0 | 16 Sep 24 17:23 UTC | 16 Sep 24 17:23 UTC |
	| addons  | addons-214113 addons disable                                                                | addons-214113        | jenkins | v1.34.0 | 16 Sep 24 17:23 UTC | 16 Sep 24 17:23 UTC |
	|         | registry --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/16 17:09:58
	Running on machine: ubuntu-20-agent-6
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0916 17:09:58.160028  383672 out.go:345] Setting OutFile to fd 1 ...
	I0916 17:09:58.160108  383672 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 17:09:58.160115  383672 out.go:358] Setting ErrFile to fd 2...
	I0916 17:09:58.160119  383672 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 17:09:58.160267  383672 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19649-375661/.minikube/bin
	I0916 17:09:58.160764  383672 out.go:352] Setting JSON to false
	I0916 17:09:58.161677  383672 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":3142,"bootTime":1726503456,"procs":177,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0916 17:09:58.161767  383672 start.go:139] virtualization: kvm guest
	I0916 17:09:58.163309  383672 out.go:177] * [addons-214113] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0916 17:09:58.164380  383672 out.go:177]   - MINIKUBE_LOCATION=19649
	I0916 17:09:58.164388  383672 notify.go:220] Checking for updates...
	I0916 17:09:58.166606  383672 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0916 17:09:58.167862  383672 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19649-375661/kubeconfig
	I0916 17:09:58.168855  383672 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19649-375661/.minikube
	I0916 17:09:58.169845  383672 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0916 17:09:58.170890  383672 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0916 17:09:58.172016  383672 driver.go:394] Setting default libvirt URI to qemu:///system
	I0916 17:09:58.201470  383672 out.go:177] * Using the kvm2 driver based on user configuration
	I0916 17:09:58.202423  383672 start.go:297] selected driver: kvm2
	I0916 17:09:58.202433  383672 start.go:901] validating driver "kvm2" against <nil>
	I0916 17:09:58.202443  383672 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0916 17:09:58.203077  383672 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 17:09:58.203172  383672 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19649-375661/.minikube/bin:/home/jenkins/workspace/KVM_Linux_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0916 17:09:58.216623  383672 install.go:137] /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0916 17:09:58.216669  383672 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0916 17:09:58.216892  383672 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0916 17:09:58.216922  383672 cni.go:84] Creating CNI manager for ""
	I0916 17:09:58.216996  383672 cni.go:158] "kvm2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0916 17:09:58.217013  383672 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0916 17:09:58.217100  383672 start.go:340] cluster config:
	{Name:addons-214113 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726481311-19649@sha256:b5dfdcf7ad8b49233db09f1c58aaf52f6522fde64cf16c939b3fc45365d24cdc Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-214113 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 17:09:58.217203  383672 iso.go:125] acquiring lock: {Name:mk520a410f89666950ce2caf9879a799775a7873 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 17:09:58.218610  383672 out.go:177] * Starting "addons-214113" primary control-plane node in "addons-214113" cluster
	I0916 17:09:58.219615  383672 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0916 17:09:58.219645  383672 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19649-375661/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4
	I0916 17:09:58.219654  383672 cache.go:56] Caching tarball of preloaded images
	I0916 17:09:58.219721  383672 preload.go:172] Found /home/jenkins/minikube-integration/19649-375661/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0916 17:09:58.219738  383672 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0916 17:09:58.220015  383672 profile.go:143] Saving config to /home/jenkins/minikube-integration/19649-375661/.minikube/profiles/addons-214113/config.json ...
	I0916 17:09:58.220035  383672 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19649-375661/.minikube/profiles/addons-214113/config.json: {Name:mka52db1f2ff7c6614af70a0b407663cc334ba8c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 17:09:58.220180  383672 start.go:360] acquireMachinesLock for addons-214113: {Name:mkaff6a46af6b8467c555dc416e6ec03d007904f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0916 17:09:58.220238  383672 start.go:364] duration metric: took 41.898µs to acquireMachinesLock for "addons-214113"
	I0916 17:09:58.220262  383672 start.go:93] Provisioning new machine with config: &{Name:addons-214113 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726481311-19649@sha256:b5dfdcf7ad8b49233db09f1c58aaf52f6522fde64cf16c939b3fc45365d24cdc Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-214113 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0916 17:09:58.220316  383672 start.go:125] createHost starting for "" (driver="kvm2")
	I0916 17:09:58.221660  383672 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0916 17:09:58.221799  383672 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0916 17:09:58.221847  383672 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 17:09:58.234877  383672 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37325
	I0916 17:09:58.235382  383672 main.go:141] libmachine: () Calling .GetVersion
	I0916 17:09:58.235949  383672 main.go:141] libmachine: Using API Version  1
	I0916 17:09:58.235968  383672 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 17:09:58.236329  383672 main.go:141] libmachine: () Calling .GetMachineName
	I0916 17:09:58.236481  383672 main.go:141] libmachine: (addons-214113) Calling .GetMachineName
	I0916 17:09:58.236629  383672 main.go:141] libmachine: (addons-214113) Calling .DriverName
	I0916 17:09:58.236742  383672 start.go:159] libmachine.API.Create for "addons-214113" (driver="kvm2")
	I0916 17:09:58.236772  383672 client.go:168] LocalClient.Create starting
	I0916 17:09:58.236807  383672 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19649-375661/.minikube/certs/ca.pem
	I0916 17:09:58.489902  383672 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19649-375661/.minikube/certs/cert.pem
	I0916 17:09:58.612735  383672 main.go:141] libmachine: Running pre-create checks...
	I0916 17:09:58.612757  383672 main.go:141] libmachine: (addons-214113) Calling .PreCreateCheck
	I0916 17:09:58.613160  383672 main.go:141] libmachine: (addons-214113) Calling .GetConfigRaw
	I0916 17:09:58.613565  383672 main.go:141] libmachine: Creating machine...
	I0916 17:09:58.613580  383672 main.go:141] libmachine: (addons-214113) Calling .Create
	I0916 17:09:58.613734  383672 main.go:141] libmachine: (addons-214113) Creating KVM machine...
	I0916 17:09:58.614924  383672 main.go:141] libmachine: (addons-214113) DBG | found existing default KVM network
	I0916 17:09:58.615784  383672 main.go:141] libmachine: (addons-214113) DBG | I0916 17:09:58.615633  383694 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0002091f0}
	I0916 17:09:58.615825  383672 main.go:141] libmachine: (addons-214113) DBG | created network xml: 
	I0916 17:09:58.615844  383672 main.go:141] libmachine: (addons-214113) DBG | <network>
	I0916 17:09:58.615853  383672 main.go:141] libmachine: (addons-214113) DBG |   <name>mk-addons-214113</name>
	I0916 17:09:58.615859  383672 main.go:141] libmachine: (addons-214113) DBG |   <dns enable='no'/>
	I0916 17:09:58.615899  383672 main.go:141] libmachine: (addons-214113) DBG |   
	I0916 17:09:58.615932  383672 main.go:141] libmachine: (addons-214113) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0916 17:09:58.615975  383672 main.go:141] libmachine: (addons-214113) DBG |     <dhcp>
	I0916 17:09:58.616002  383672 main.go:141] libmachine: (addons-214113) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0916 17:09:58.616013  383672 main.go:141] libmachine: (addons-214113) DBG |     </dhcp>
	I0916 17:09:58.616021  383672 main.go:141] libmachine: (addons-214113) DBG |   </ip>
	I0916 17:09:58.616026  383672 main.go:141] libmachine: (addons-214113) DBG |   
	I0916 17:09:58.616031  383672 main.go:141] libmachine: (addons-214113) DBG | </network>
	I0916 17:09:58.616039  383672 main.go:141] libmachine: (addons-214113) DBG | 
	I0916 17:09:58.621209  383672 main.go:141] libmachine: (addons-214113) DBG | trying to create private KVM network mk-addons-214113 192.168.39.0/24...
	I0916 17:09:58.684139  383672 main.go:141] libmachine: (addons-214113) DBG | private KVM network mk-addons-214113 192.168.39.0/24 created
	I0916 17:09:58.684175  383672 main.go:141] libmachine: (addons-214113) Setting up store path in /home/jenkins/minikube-integration/19649-375661/.minikube/machines/addons-214113 ...
	I0916 17:09:58.684196  383672 main.go:141] libmachine: (addons-214113) DBG | I0916 17:09:58.684113  383694 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19649-375661/.minikube
	I0916 17:09:58.684214  383672 main.go:141] libmachine: (addons-214113) Building disk image from file:///home/jenkins/minikube-integration/19649-375661/.minikube/cache/iso/amd64/minikube-v1.34.0-1726481713-19649-amd64.iso
	I0916 17:09:58.684262  383672 main.go:141] libmachine: (addons-214113) Downloading /home/jenkins/minikube-integration/19649-375661/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19649-375661/.minikube/cache/iso/amd64/minikube-v1.34.0-1726481713-19649-amd64.iso...
	I0916 17:09:58.973901  383672 main.go:141] libmachine: (addons-214113) DBG | I0916 17:09:58.973767  383694 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19649-375661/.minikube/machines/addons-214113/id_rsa...
	I0916 17:09:59.177848  383672 main.go:141] libmachine: (addons-214113) DBG | I0916 17:09:59.177716  383694 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19649-375661/.minikube/machines/addons-214113/addons-214113.rawdisk...
	I0916 17:09:59.177882  383672 main.go:141] libmachine: (addons-214113) DBG | Writing magic tar header
	I0916 17:09:59.177893  383672 main.go:141] libmachine: (addons-214113) DBG | Writing SSH key tar header
	I0916 17:09:59.177900  383672 main.go:141] libmachine: (addons-214113) DBG | I0916 17:09:59.177834  383694 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19649-375661/.minikube/machines/addons-214113 ...
	I0916 17:09:59.177968  383672 main.go:141] libmachine: (addons-214113) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19649-375661/.minikube/machines/addons-214113
	I0916 17:09:59.177991  383672 main.go:141] libmachine: (addons-214113) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19649-375661/.minikube/machines
	I0916 17:09:59.178018  383672 main.go:141] libmachine: (addons-214113) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19649-375661/.minikube
	I0916 17:09:59.178032  383672 main.go:141] libmachine: (addons-214113) Setting executable bit set on /home/jenkins/minikube-integration/19649-375661/.minikube/machines/addons-214113 (perms=drwx------)
	I0916 17:09:59.178045  383672 main.go:141] libmachine: (addons-214113) Setting executable bit set on /home/jenkins/minikube-integration/19649-375661/.minikube/machines (perms=drwxr-xr-x)
	I0916 17:09:59.178054  383672 main.go:141] libmachine: (addons-214113) Setting executable bit set on /home/jenkins/minikube-integration/19649-375661/.minikube (perms=drwxr-xr-x)
	I0916 17:09:59.178064  383672 main.go:141] libmachine: (addons-214113) Setting executable bit set on /home/jenkins/minikube-integration/19649-375661 (perms=drwxrwxr-x)
	I0916 17:09:59.178073  383672 main.go:141] libmachine: (addons-214113) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0916 17:09:59.178088  383672 main.go:141] libmachine: (addons-214113) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0916 17:09:59.178101  383672 main.go:141] libmachine: (addons-214113) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19649-375661
	I0916 17:09:59.178110  383672 main.go:141] libmachine: (addons-214113) Creating domain...
	I0916 17:09:59.178155  383672 main.go:141] libmachine: (addons-214113) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0916 17:09:59.178180  383672 main.go:141] libmachine: (addons-214113) DBG | Checking permissions on dir: /home/jenkins
	I0916 17:09:59.178192  383672 main.go:141] libmachine: (addons-214113) DBG | Checking permissions on dir: /home
	I0916 17:09:59.178203  383672 main.go:141] libmachine: (addons-214113) DBG | Skipping /home - not owner
	I0916 17:09:59.178978  383672 main.go:141] libmachine: (addons-214113) define libvirt domain using xml: 
	I0916 17:09:59.178996  383672 main.go:141] libmachine: (addons-214113) <domain type='kvm'>
	I0916 17:09:59.179048  383672 main.go:141] libmachine: (addons-214113)   <name>addons-214113</name>
	I0916 17:09:59.179075  383672 main.go:141] libmachine: (addons-214113)   <memory unit='MiB'>4000</memory>
	I0916 17:09:59.179102  383672 main.go:141] libmachine: (addons-214113)   <vcpu>2</vcpu>
	I0916 17:09:59.179119  383672 main.go:141] libmachine: (addons-214113)   <features>
	I0916 17:09:59.179125  383672 main.go:141] libmachine: (addons-214113)     <acpi/>
	I0916 17:09:59.179133  383672 main.go:141] libmachine: (addons-214113)     <apic/>
	I0916 17:09:59.179141  383672 main.go:141] libmachine: (addons-214113)     <pae/>
	I0916 17:09:59.179148  383672 main.go:141] libmachine: (addons-214113)     
	I0916 17:09:59.179153  383672 main.go:141] libmachine: (addons-214113)   </features>
	I0916 17:09:59.179160  383672 main.go:141] libmachine: (addons-214113)   <cpu mode='host-passthrough'>
	I0916 17:09:59.179165  383672 main.go:141] libmachine: (addons-214113)   
	I0916 17:09:59.179173  383672 main.go:141] libmachine: (addons-214113)   </cpu>
	I0916 17:09:59.179179  383672 main.go:141] libmachine: (addons-214113)   <os>
	I0916 17:09:59.179185  383672 main.go:141] libmachine: (addons-214113)     <type>hvm</type>
	I0916 17:09:59.179190  383672 main.go:141] libmachine: (addons-214113)     <boot dev='cdrom'/>
	I0916 17:09:59.179196  383672 main.go:141] libmachine: (addons-214113)     <boot dev='hd'/>
	I0916 17:09:59.179202  383672 main.go:141] libmachine: (addons-214113)     <bootmenu enable='no'/>
	I0916 17:09:59.179207  383672 main.go:141] libmachine: (addons-214113)   </os>
	I0916 17:09:59.179225  383672 main.go:141] libmachine: (addons-214113)   <devices>
	I0916 17:09:59.179237  383672 main.go:141] libmachine: (addons-214113)     <disk type='file' device='cdrom'>
	I0916 17:09:59.179247  383672 main.go:141] libmachine: (addons-214113)       <source file='/home/jenkins/minikube-integration/19649-375661/.minikube/machines/addons-214113/boot2docker.iso'/>
	I0916 17:09:59.179256  383672 main.go:141] libmachine: (addons-214113)       <target dev='hdc' bus='scsi'/>
	I0916 17:09:59.179264  383672 main.go:141] libmachine: (addons-214113)       <readonly/>
	I0916 17:09:59.179274  383672 main.go:141] libmachine: (addons-214113)     </disk>
	I0916 17:09:59.179286  383672 main.go:141] libmachine: (addons-214113)     <disk type='file' device='disk'>
	I0916 17:09:59.179299  383672 main.go:141] libmachine: (addons-214113)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0916 17:09:59.179313  383672 main.go:141] libmachine: (addons-214113)       <source file='/home/jenkins/minikube-integration/19649-375661/.minikube/machines/addons-214113/addons-214113.rawdisk'/>
	I0916 17:09:59.179324  383672 main.go:141] libmachine: (addons-214113)       <target dev='hda' bus='virtio'/>
	I0916 17:09:59.179332  383672 main.go:141] libmachine: (addons-214113)     </disk>
	I0916 17:09:59.179339  383672 main.go:141] libmachine: (addons-214113)     <interface type='network'>
	I0916 17:09:59.179352  383672 main.go:141] libmachine: (addons-214113)       <source network='mk-addons-214113'/>
	I0916 17:09:59.179362  383672 main.go:141] libmachine: (addons-214113)       <model type='virtio'/>
	I0916 17:09:59.179370  383672 main.go:141] libmachine: (addons-214113)     </interface>
	I0916 17:09:59.179379  383672 main.go:141] libmachine: (addons-214113)     <interface type='network'>
	I0916 17:09:59.179388  383672 main.go:141] libmachine: (addons-214113)       <source network='default'/>
	I0916 17:09:59.179397  383672 main.go:141] libmachine: (addons-214113)       <model type='virtio'/>
	I0916 17:09:59.179407  383672 main.go:141] libmachine: (addons-214113)     </interface>
	I0916 17:09:59.179415  383672 main.go:141] libmachine: (addons-214113)     <serial type='pty'>
	I0916 17:09:59.179432  383672 main.go:141] libmachine: (addons-214113)       <target port='0'/>
	I0916 17:09:59.179448  383672 main.go:141] libmachine: (addons-214113)     </serial>
	I0916 17:09:59.179460  383672 main.go:141] libmachine: (addons-214113)     <console type='pty'>
	I0916 17:09:59.179473  383672 main.go:141] libmachine: (addons-214113)       <target type='serial' port='0'/>
	I0916 17:09:59.179488  383672 main.go:141] libmachine: (addons-214113)     </console>
	I0916 17:09:59.179493  383672 main.go:141] libmachine: (addons-214113)     <rng model='virtio'>
	I0916 17:09:59.179509  383672 main.go:141] libmachine: (addons-214113)       <backend model='random'>/dev/random</backend>
	I0916 17:09:59.179515  383672 main.go:141] libmachine: (addons-214113)     </rng>
	I0916 17:09:59.179519  383672 main.go:141] libmachine: (addons-214113)     
	I0916 17:09:59.179525  383672 main.go:141] libmachine: (addons-214113)     
	I0916 17:09:59.179530  383672 main.go:141] libmachine: (addons-214113)   </devices>
	I0916 17:09:59.179536  383672 main.go:141] libmachine: (addons-214113) </domain>
	I0916 17:09:59.179543  383672 main.go:141] libmachine: (addons-214113) 
	I0916 17:09:59.184726  383672 main.go:141] libmachine: (addons-214113) DBG | domain addons-214113 has defined MAC address 52:54:00:08:54:ec in network default
	I0916 17:09:59.185223  383672 main.go:141] libmachine: (addons-214113) Ensuring networks are active...
	I0916 17:09:59.185244  383672 main.go:141] libmachine: (addons-214113) DBG | domain addons-214113 has defined MAC address 52:54:00:53:e2:c9 in network mk-addons-214113
	I0916 17:09:59.185808  383672 main.go:141] libmachine: (addons-214113) Ensuring network default is active
	I0916 17:09:59.186112  383672 main.go:141] libmachine: (addons-214113) Ensuring network mk-addons-214113 is active
	I0916 17:09:59.186626  383672 main.go:141] libmachine: (addons-214113) Getting domain xml...
	I0916 17:09:59.187173  383672 main.go:141] libmachine: (addons-214113) Creating domain...
	I0916 17:10:00.515892  383672 main.go:141] libmachine: (addons-214113) Waiting to get IP...
	I0916 17:10:00.516702  383672 main.go:141] libmachine: (addons-214113) DBG | domain addons-214113 has defined MAC address 52:54:00:53:e2:c9 in network mk-addons-214113
	I0916 17:10:00.517049  383672 main.go:141] libmachine: (addons-214113) DBG | unable to find current IP address of domain addons-214113 in network mk-addons-214113
	I0916 17:10:00.517089  383672 main.go:141] libmachine: (addons-214113) DBG | I0916 17:10:00.517024  383694 retry.go:31] will retry after 309.488061ms: waiting for machine to come up
	I0916 17:10:00.828515  383672 main.go:141] libmachine: (addons-214113) DBG | domain addons-214113 has defined MAC address 52:54:00:53:e2:c9 in network mk-addons-214113
	I0916 17:10:00.828924  383672 main.go:141] libmachine: (addons-214113) DBG | unable to find current IP address of domain addons-214113 in network mk-addons-214113
	I0916 17:10:00.828949  383672 main.go:141] libmachine: (addons-214113) DBG | I0916 17:10:00.828882  383694 retry.go:31] will retry after 378.809371ms: waiting for machine to come up
	I0916 17:10:01.209465  383672 main.go:141] libmachine: (addons-214113) DBG | domain addons-214113 has defined MAC address 52:54:00:53:e2:c9 in network mk-addons-214113
	I0916 17:10:01.209989  383672 main.go:141] libmachine: (addons-214113) DBG | unable to find current IP address of domain addons-214113 in network mk-addons-214113
	I0916 17:10:01.210020  383672 main.go:141] libmachine: (addons-214113) DBG | I0916 17:10:01.209931  383694 retry.go:31] will retry after 478.459269ms: waiting for machine to come up
	I0916 17:10:01.689396  383672 main.go:141] libmachine: (addons-214113) DBG | domain addons-214113 has defined MAC address 52:54:00:53:e2:c9 in network mk-addons-214113
	I0916 17:10:01.689886  383672 main.go:141] libmachine: (addons-214113) DBG | unable to find current IP address of domain addons-214113 in network mk-addons-214113
	I0916 17:10:01.689917  383672 main.go:141] libmachine: (addons-214113) DBG | I0916 17:10:01.689833  383694 retry.go:31] will retry after 370.433368ms: waiting for machine to come up
	I0916 17:10:02.062254  383672 main.go:141] libmachine: (addons-214113) DBG | domain addons-214113 has defined MAC address 52:54:00:53:e2:c9 in network mk-addons-214113
	I0916 17:10:02.062687  383672 main.go:141] libmachine: (addons-214113) DBG | unable to find current IP address of domain addons-214113 in network mk-addons-214113
	I0916 17:10:02.062718  383672 main.go:141] libmachine: (addons-214113) DBG | I0916 17:10:02.062640  383694 retry.go:31] will retry after 749.209503ms: waiting for machine to come up
	I0916 17:10:02.813407  383672 main.go:141] libmachine: (addons-214113) DBG | domain addons-214113 has defined MAC address 52:54:00:53:e2:c9 in network mk-addons-214113
	I0916 17:10:02.813838  383672 main.go:141] libmachine: (addons-214113) DBG | unable to find current IP address of domain addons-214113 in network mk-addons-214113
	I0916 17:10:02.813869  383672 main.go:141] libmachine: (addons-214113) DBG | I0916 17:10:02.813784  383694 retry.go:31] will retry after 625.207891ms: waiting for machine to come up
	I0916 17:10:03.440133  383672 main.go:141] libmachine: (addons-214113) DBG | domain addons-214113 has defined MAC address 52:54:00:53:e2:c9 in network mk-addons-214113
	I0916 17:10:03.440521  383672 main.go:141] libmachine: (addons-214113) DBG | unable to find current IP address of domain addons-214113 in network mk-addons-214113
	I0916 17:10:03.440543  383672 main.go:141] libmachine: (addons-214113) DBG | I0916 17:10:03.440494  383694 retry.go:31] will retry after 742.15446ms: waiting for machine to come up
	I0916 17:10:04.184237  383672 main.go:141] libmachine: (addons-214113) DBG | domain addons-214113 has defined MAC address 52:54:00:53:e2:c9 in network mk-addons-214113
	I0916 17:10:04.184548  383672 main.go:141] libmachine: (addons-214113) DBG | unable to find current IP address of domain addons-214113 in network mk-addons-214113
	I0916 17:10:04.184579  383672 main.go:141] libmachine: (addons-214113) DBG | I0916 17:10:04.184494  383694 retry.go:31] will retry after 1.037886533s: waiting for machine to come up
	I0916 17:10:05.223509  383672 main.go:141] libmachine: (addons-214113) DBG | domain addons-214113 has defined MAC address 52:54:00:53:e2:c9 in network mk-addons-214113
	I0916 17:10:05.223815  383672 main.go:141] libmachine: (addons-214113) DBG | unable to find current IP address of domain addons-214113 in network mk-addons-214113
	I0916 17:10:05.223847  383672 main.go:141] libmachine: (addons-214113) DBG | I0916 17:10:05.223753  383694 retry.go:31] will retry after 1.65383992s: waiting for machine to come up
	I0916 17:10:06.879648  383672 main.go:141] libmachine: (addons-214113) DBG | domain addons-214113 has defined MAC address 52:54:00:53:e2:c9 in network mk-addons-214113
	I0916 17:10:06.880045  383672 main.go:141] libmachine: (addons-214113) DBG | unable to find current IP address of domain addons-214113 in network mk-addons-214113
	I0916 17:10:06.880076  383672 main.go:141] libmachine: (addons-214113) DBG | I0916 17:10:06.879988  383694 retry.go:31] will retry after 2.011054804s: waiting for machine to come up
	I0916 17:10:08.893110  383672 main.go:141] libmachine: (addons-214113) DBG | domain addons-214113 has defined MAC address 52:54:00:53:e2:c9 in network mk-addons-214113
	I0916 17:10:08.893441  383672 main.go:141] libmachine: (addons-214113) DBG | unable to find current IP address of domain addons-214113 in network mk-addons-214113
	I0916 17:10:08.893470  383672 main.go:141] libmachine: (addons-214113) DBG | I0916 17:10:08.893374  383694 retry.go:31] will retry after 2.277452636s: waiting for machine to come up
	I0916 17:10:11.173755  383672 main.go:141] libmachine: (addons-214113) DBG | domain addons-214113 has defined MAC address 52:54:00:53:e2:c9 in network mk-addons-214113
	I0916 17:10:11.174197  383672 main.go:141] libmachine: (addons-214113) DBG | unable to find current IP address of domain addons-214113 in network mk-addons-214113
	I0916 17:10:11.174221  383672 main.go:141] libmachine: (addons-214113) DBG | I0916 17:10:11.174156  383694 retry.go:31] will retry after 3.218385718s: waiting for machine to come up
	I0916 17:10:14.394023  383672 main.go:141] libmachine: (addons-214113) DBG | domain addons-214113 has defined MAC address 52:54:00:53:e2:c9 in network mk-addons-214113
	I0916 17:10:14.394298  383672 main.go:141] libmachine: (addons-214113) DBG | unable to find current IP address of domain addons-214113 in network mk-addons-214113
	I0916 17:10:14.394325  383672 main.go:141] libmachine: (addons-214113) DBG | I0916 17:10:14.394252  383694 retry.go:31] will retry after 3.065013222s: waiting for machine to come up
	I0916 17:10:17.462290  383672 main.go:141] libmachine: (addons-214113) DBG | domain addons-214113 has defined MAC address 52:54:00:53:e2:c9 in network mk-addons-214113
	I0916 17:10:17.462715  383672 main.go:141] libmachine: (addons-214113) DBG | unable to find current IP address of domain addons-214113 in network mk-addons-214113
	I0916 17:10:17.462736  383672 main.go:141] libmachine: (addons-214113) DBG | I0916 17:10:17.462680  383694 retry.go:31] will retry after 3.999500108s: waiting for machine to come up
	I0916 17:10:21.465528  383672 main.go:141] libmachine: (addons-214113) DBG | domain addons-214113 has defined MAC address 52:54:00:53:e2:c9 in network mk-addons-214113
	I0916 17:10:21.465951  383672 main.go:141] libmachine: (addons-214113) Found IP for machine: 192.168.39.110
	I0916 17:10:21.465975  383672 main.go:141] libmachine: (addons-214113) Reserving static IP address...
	I0916 17:10:21.465988  383672 main.go:141] libmachine: (addons-214113) DBG | domain addons-214113 has current primary IP address 192.168.39.110 and MAC address 52:54:00:53:e2:c9 in network mk-addons-214113
	I0916 17:10:21.466302  383672 main.go:141] libmachine: (addons-214113) DBG | unable to find host DHCP lease matching {name: "addons-214113", mac: "52:54:00:53:e2:c9", ip: "192.168.39.110"} in network mk-addons-214113
	I0916 17:10:21.534093  383672 main.go:141] libmachine: (addons-214113) DBG | Getting to WaitForSSH function...
	I0916 17:10:21.534121  383672 main.go:141] libmachine: (addons-214113) Reserved static IP address: 192.168.39.110
	I0916 17:10:21.534165  383672 main.go:141] libmachine: (addons-214113) Waiting for SSH to be available...
	I0916 17:10:21.536277  383672 main.go:141] libmachine: (addons-214113) DBG | domain addons-214113 has defined MAC address 52:54:00:53:e2:c9 in network mk-addons-214113
	I0916 17:10:21.536673  383672 main.go:141] libmachine: (addons-214113) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:53:e2:c9", ip: ""} in network mk-addons-214113: {Iface:virbr1 ExpiryTime:2024-09-16 18:10:12 +0000 UTC Type:0 Mac:52:54:00:53:e2:c9 Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:minikube Clientid:01:52:54:00:53:e2:c9}
	I0916 17:10:21.536698  383672 main.go:141] libmachine: (addons-214113) DBG | domain addons-214113 has defined IP address 192.168.39.110 and MAC address 52:54:00:53:e2:c9 in network mk-addons-214113
	I0916 17:10:21.536887  383672 main.go:141] libmachine: (addons-214113) DBG | Using SSH client type: external
	I0916 17:10:21.536916  383672 main.go:141] libmachine: (addons-214113) DBG | Using SSH private key: /home/jenkins/minikube-integration/19649-375661/.minikube/machines/addons-214113/id_rsa (-rw-------)
	I0916 17:10:21.536947  383672 main.go:141] libmachine: (addons-214113) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.110 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19649-375661/.minikube/machines/addons-214113/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0916 17:10:21.536961  383672 main.go:141] libmachine: (addons-214113) DBG | About to run SSH command:
	I0916 17:10:21.536973  383672 main.go:141] libmachine: (addons-214113) DBG | exit 0
	I0916 17:10:21.660492  383672 main.go:141] libmachine: (addons-214113) DBG | SSH cmd err, output: <nil>: 
	I0916 17:10:21.660826  383672 main.go:141] libmachine: (addons-214113) KVM machine creation complete!
	I0916 17:10:21.661025  383672 main.go:141] libmachine: (addons-214113) Calling .GetConfigRaw
	I0916 17:10:21.661596  383672 main.go:141] libmachine: (addons-214113) Calling .DriverName
	I0916 17:10:21.661824  383672 main.go:141] libmachine: (addons-214113) Calling .DriverName
	I0916 17:10:21.662067  383672 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0916 17:10:21.662083  383672 main.go:141] libmachine: (addons-214113) Calling .GetState
	I0916 17:10:21.663317  383672 main.go:141] libmachine: Detecting operating system of created instance...
	I0916 17:10:21.663334  383672 main.go:141] libmachine: Waiting for SSH to be available...
	I0916 17:10:21.663342  383672 main.go:141] libmachine: Getting to WaitForSSH function...
	I0916 17:10:21.663361  383672 main.go:141] libmachine: (addons-214113) Calling .GetSSHHostname
	I0916 17:10:21.665257  383672 main.go:141] libmachine: (addons-214113) DBG | domain addons-214113 has defined MAC address 52:54:00:53:e2:c9 in network mk-addons-214113
	I0916 17:10:21.665572  383672 main.go:141] libmachine: (addons-214113) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:53:e2:c9", ip: ""} in network mk-addons-214113: {Iface:virbr1 ExpiryTime:2024-09-16 18:10:12 +0000 UTC Type:0 Mac:52:54:00:53:e2:c9 Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:addons-214113 Clientid:01:52:54:00:53:e2:c9}
	I0916 17:10:21.665602  383672 main.go:141] libmachine: (addons-214113) DBG | domain addons-214113 has defined IP address 192.168.39.110 and MAC address 52:54:00:53:e2:c9 in network mk-addons-214113
	I0916 17:10:21.665712  383672 main.go:141] libmachine: (addons-214113) Calling .GetSSHPort
	I0916 17:10:21.665892  383672 main.go:141] libmachine: (addons-214113) Calling .GetSSHKeyPath
	I0916 17:10:21.666018  383672 main.go:141] libmachine: (addons-214113) Calling .GetSSHKeyPath
	I0916 17:10:21.666130  383672 main.go:141] libmachine: (addons-214113) Calling .GetSSHUsername
	I0916 17:10:21.666275  383672 main.go:141] libmachine: Using SSH client type: native
	I0916 17:10:21.666464  383672 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.110 22 <nil> <nil>}
	I0916 17:10:21.666474  383672 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0916 17:10:21.767606  383672 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0916 17:10:21.767633  383672 main.go:141] libmachine: Detecting the provisioner...
	I0916 17:10:21.767645  383672 main.go:141] libmachine: (addons-214113) Calling .GetSSHHostname
	I0916 17:10:21.770018  383672 main.go:141] libmachine: (addons-214113) DBG | domain addons-214113 has defined MAC address 52:54:00:53:e2:c9 in network mk-addons-214113
	I0916 17:10:21.770363  383672 main.go:141] libmachine: (addons-214113) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:53:e2:c9", ip: ""} in network mk-addons-214113: {Iface:virbr1 ExpiryTime:2024-09-16 18:10:12 +0000 UTC Type:0 Mac:52:54:00:53:e2:c9 Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:addons-214113 Clientid:01:52:54:00:53:e2:c9}
	I0916 17:10:21.770393  383672 main.go:141] libmachine: (addons-214113) DBG | domain addons-214113 has defined IP address 192.168.39.110 and MAC address 52:54:00:53:e2:c9 in network mk-addons-214113
	I0916 17:10:21.770479  383672 main.go:141] libmachine: (addons-214113) Calling .GetSSHPort
	I0916 17:10:21.770661  383672 main.go:141] libmachine: (addons-214113) Calling .GetSSHKeyPath
	I0916 17:10:21.770812  383672 main.go:141] libmachine: (addons-214113) Calling .GetSSHKeyPath
	I0916 17:10:21.770948  383672 main.go:141] libmachine: (addons-214113) Calling .GetSSHUsername
	I0916 17:10:21.771073  383672 main.go:141] libmachine: Using SSH client type: native
	I0916 17:10:21.771270  383672 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.110 22 <nil> <nil>}
	I0916 17:10:21.771286  383672 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0916 17:10:21.876677  383672 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0916 17:10:21.876744  383672 main.go:141] libmachine: found compatible host: buildroot
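Provisioner detection keys off the ID field of the /etc/os-release output just printed (ID=buildroot selects the Buildroot provisioning path). A stdlib-only sketch of that parse follows; the compatibility mapping at the end is illustrative, not minikube's actual table.

```go
// osrelease.go - sketch: parse /etc/os-release and pick a provisioner
// the way a machine driver might ("buildroot" maps to the Buildroot path).
package main

import (
	"bufio"
	"fmt"
	"log"
	"os"
	"strings"
)

func parseOSRelease(path string) (map[string]string, error) {
	f, err := os.Open(path)
	if err != nil {
		return nil, err
	}
	defer f.Close()
	fields := map[string]string{}
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		if line == "" || strings.HasPrefix(line, "#") {
			continue
		}
		k, v, ok := strings.Cut(line, "=")
		if !ok {
			continue
		}
		fields[k] = strings.Trim(v, `"`) // values may be quoted, per os-release(5)
	}
	return fields, sc.Err()
}

func main() {
	fields, err := parseOSRelease("/etc/os-release")
	if err != nil {
		log.Fatal(err)
	}
	if fields["ID"] == "buildroot" {
		fmt.Println("found compatible host:", fields["ID"])
	} else {
		fmt.Println("unsupported host:", fields["PRETTY_NAME"])
	}
}
```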
	I0916 17:10:21.876758  383672 main.go:141] libmachine: Provisioning with buildroot...
	I0916 17:10:21.876771  383672 main.go:141] libmachine: (addons-214113) Calling .GetMachineName
	I0916 17:10:21.876970  383672 buildroot.go:166] provisioning hostname "addons-214113"
	I0916 17:10:21.876995  383672 main.go:141] libmachine: (addons-214113) Calling .GetMachineName
	I0916 17:10:21.877170  383672 main.go:141] libmachine: (addons-214113) Calling .GetSSHHostname
	I0916 17:10:21.879306  383672 main.go:141] libmachine: (addons-214113) DBG | domain addons-214113 has defined MAC address 52:54:00:53:e2:c9 in network mk-addons-214113
	I0916 17:10:21.879653  383672 main.go:141] libmachine: (addons-214113) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:53:e2:c9", ip: ""} in network mk-addons-214113: {Iface:virbr1 ExpiryTime:2024-09-16 18:10:12 +0000 UTC Type:0 Mac:52:54:00:53:e2:c9 Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:addons-214113 Clientid:01:52:54:00:53:e2:c9}
	I0916 17:10:21.879688  383672 main.go:141] libmachine: (addons-214113) DBG | domain addons-214113 has defined IP address 192.168.39.110 and MAC address 52:54:00:53:e2:c9 in network mk-addons-214113
	I0916 17:10:21.879780  383672 main.go:141] libmachine: (addons-214113) Calling .GetSSHPort
	I0916 17:10:21.879950  383672 main.go:141] libmachine: (addons-214113) Calling .GetSSHKeyPath
	I0916 17:10:21.880131  383672 main.go:141] libmachine: (addons-214113) Calling .GetSSHKeyPath
	I0916 17:10:21.880263  383672 main.go:141] libmachine: (addons-214113) Calling .GetSSHUsername
	I0916 17:10:21.880437  383672 main.go:141] libmachine: Using SSH client type: native
	I0916 17:10:21.880587  383672 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.110 22 <nil> <nil>}
	I0916 17:10:21.880598  383672 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-214113 && echo "addons-214113" | sudo tee /etc/hostname
	I0916 17:10:21.996437  383672 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-214113
	
	I0916 17:10:21.996459  383672 main.go:141] libmachine: (addons-214113) Calling .GetSSHHostname
	I0916 17:10:21.998695  383672 main.go:141] libmachine: (addons-214113) DBG | domain addons-214113 has defined MAC address 52:54:00:53:e2:c9 in network mk-addons-214113
	I0916 17:10:21.999005  383672 main.go:141] libmachine: (addons-214113) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:53:e2:c9", ip: ""} in network mk-addons-214113: {Iface:virbr1 ExpiryTime:2024-09-16 18:10:12 +0000 UTC Type:0 Mac:52:54:00:53:e2:c9 Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:addons-214113 Clientid:01:52:54:00:53:e2:c9}
	I0916 17:10:21.999066  383672 main.go:141] libmachine: (addons-214113) DBG | domain addons-214113 has defined IP address 192.168.39.110 and MAC address 52:54:00:53:e2:c9 in network mk-addons-214113
	I0916 17:10:21.999134  383672 main.go:141] libmachine: (addons-214113) Calling .GetSSHPort
	I0916 17:10:21.999314  383672 main.go:141] libmachine: (addons-214113) Calling .GetSSHKeyPath
	I0916 17:10:21.999483  383672 main.go:141] libmachine: (addons-214113) Calling .GetSSHKeyPath
	I0916 17:10:21.999626  383672 main.go:141] libmachine: (addons-214113) Calling .GetSSHUsername
	I0916 17:10:21.999792  383672 main.go:141] libmachine: Using SSH client type: native
	I0916 17:10:21.999985  383672 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.110 22 <nil> <nil>}
	I0916 17:10:22.000002  383672 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-214113' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-214113/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-214113' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0916 17:10:22.111382  383672 main.go:141] libmachine: SSH cmd err, output: <nil>: 
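The shell block above is an idempotent hostname fixup: if no /etc/hosts line ends with the new hostname, rewrite the existing 127.0.1.1 entry, else append one. The same logic as a Go sketch (direct writes to /etc/hosts need root; error handling is minimal):

```go
// hostsfix.go - sketch: ensure /etc/hosts maps 127.0.1.1 to the hostname,
// mirroring the grep/sed/append fallback in the provisioning step above.
package main

import (
	"fmt"
	"os"
	"strings"
)

func ensureHostsEntry(contents, hostname string) string {
	lines := strings.Split(contents, "\n")
	for _, l := range lines {
		t := strings.TrimSpace(l)
		// Matches the grep: some entry already ends with the hostname.
		if strings.HasSuffix(t, " "+hostname) || strings.HasSuffix(t, "\t"+hostname) {
			return contents // already present, nothing to do
		}
	}
	for i, l := range lines {
		if strings.HasPrefix(l, "127.0.1.1") {
			lines[i] = "127.0.1.1 " + hostname // rewrite existing entry (the sed branch)
			return strings.Join(lines, "\n")
		}
	}
	return contents + "\n127.0.1.1 " + hostname + "\n" // append (the tee -a branch)
}

func main() {
	data, err := os.ReadFile("/etc/hosts")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fixed := ensureHostsEntry(string(data), "addons-214113")
	if err := os.WriteFile("/etc/hosts", []byte(fixed), 0644); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
```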
	I0916 17:10:22.111406  383672 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19649-375661/.minikube CaCertPath:/home/jenkins/minikube-integration/19649-375661/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19649-375661/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19649-375661/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19649-375661/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19649-375661/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19649-375661/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19649-375661/.minikube}
	I0916 17:10:22.111449  383672 buildroot.go:174] setting up certificates
	I0916 17:10:22.111467  383672 provision.go:84] configureAuth start
	I0916 17:10:22.111481  383672 main.go:141] libmachine: (addons-214113) Calling .GetMachineName
	I0916 17:10:22.111678  383672 main.go:141] libmachine: (addons-214113) Calling .GetIP
	I0916 17:10:22.114028  383672 main.go:141] libmachine: (addons-214113) DBG | domain addons-214113 has defined MAC address 52:54:00:53:e2:c9 in network mk-addons-214113
	I0916 17:10:22.114385  383672 main.go:141] libmachine: (addons-214113) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:53:e2:c9", ip: ""} in network mk-addons-214113: {Iface:virbr1 ExpiryTime:2024-09-16 18:10:12 +0000 UTC Type:0 Mac:52:54:00:53:e2:c9 Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:addons-214113 Clientid:01:52:54:00:53:e2:c9}
	I0916 17:10:22.114409  383672 main.go:141] libmachine: (addons-214113) DBG | domain addons-214113 has defined IP address 192.168.39.110 and MAC address 52:54:00:53:e2:c9 in network mk-addons-214113
	I0916 17:10:22.114527  383672 main.go:141] libmachine: (addons-214113) Calling .GetSSHHostname
	I0916 17:10:22.116974  383672 main.go:141] libmachine: (addons-214113) DBG | domain addons-214113 has defined MAC address 52:54:00:53:e2:c9 in network mk-addons-214113
	I0916 17:10:22.117323  383672 main.go:141] libmachine: (addons-214113) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:53:e2:c9", ip: ""} in network mk-addons-214113: {Iface:virbr1 ExpiryTime:2024-09-16 18:10:12 +0000 UTC Type:0 Mac:52:54:00:53:e2:c9 Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:addons-214113 Clientid:01:52:54:00:53:e2:c9}
	I0916 17:10:22.117347  383672 main.go:141] libmachine: (addons-214113) DBG | domain addons-214113 has defined IP address 192.168.39.110 and MAC address 52:54:00:53:e2:c9 in network mk-addons-214113
	I0916 17:10:22.117456  383672 provision.go:143] copyHostCerts
	I0916 17:10:22.117528  383672 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19649-375661/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19649-375661/.minikube/key.pem (1675 bytes)
	I0916 17:10:22.117633  383672 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19649-375661/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19649-375661/.minikube/ca.pem (1078 bytes)
	I0916 17:10:22.117693  383672 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19649-375661/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19649-375661/.minikube/cert.pem (1123 bytes)
	I0916 17:10:22.117740  383672 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19649-375661/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19649-375661/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19649-375661/.minikube/certs/ca-key.pem org=jenkins.addons-214113 san=[127.0.0.1 192.168.39.110 addons-214113 localhost minikube]
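The server cert generated here is signed by the local minikube CA and carries the VM IP plus common aliases as SANs. Below is a self-contained crypto/x509 sketch of that shape; the in-memory CA and 2048-bit RSA keys stand in for the ca.pem/ca-key.pem files referenced above.

```go
// servercert.go - sketch: sign a server cert with SANs (IPs + DNS names)
// from a CA key pair, the shape of the "generating server cert" step above.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"log"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// In-memory CA; the real code loads ca.pem / ca-key.pem from .minikube.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{Organization: []string{"jenkins.addons-214113"}},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(10, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	if err != nil {
		log.Fatal(err)
	}
	caCert, err := x509.ParseCertificate(caDER)
	if err != nil {
		log.Fatal(err)
	}

	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.addons-214113"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(10, 0, 0),
		// SAN list matching the san=[...] log line above.
		DNSNames:    []string{"addons-214113", "localhost", "minikube"},
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.110")},
		KeyUsage:    x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	srvDER, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	if err != nil {
		log.Fatal(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
}
```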
	I0916 17:10:22.261007  383672 provision.go:177] copyRemoteCerts
	I0916 17:10:22.261070  383672 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0916 17:10:22.261087  383672 main.go:141] libmachine: (addons-214113) Calling .GetSSHHostname
	I0916 17:10:22.263282  383672 main.go:141] libmachine: (addons-214113) DBG | domain addons-214113 has defined MAC address 52:54:00:53:e2:c9 in network mk-addons-214113
	I0916 17:10:22.263584  383672 main.go:141] libmachine: (addons-214113) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:53:e2:c9", ip: ""} in network mk-addons-214113: {Iface:virbr1 ExpiryTime:2024-09-16 18:10:12 +0000 UTC Type:0 Mac:52:54:00:53:e2:c9 Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:addons-214113 Clientid:01:52:54:00:53:e2:c9}
	I0916 17:10:22.263608  383672 main.go:141] libmachine: (addons-214113) DBG | domain addons-214113 has defined IP address 192.168.39.110 and MAC address 52:54:00:53:e2:c9 in network mk-addons-214113
	I0916 17:10:22.263763  383672 main.go:141] libmachine: (addons-214113) Calling .GetSSHPort
	I0916 17:10:22.263932  383672 main.go:141] libmachine: (addons-214113) Calling .GetSSHKeyPath
	I0916 17:10:22.264068  383672 main.go:141] libmachine: (addons-214113) Calling .GetSSHUsername
	I0916 17:10:22.264173  383672 sshutil.go:53] new ssh client: &{IP:192.168.39.110 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19649-375661/.minikube/machines/addons-214113/id_rsa Username:docker}
	I0916 17:10:22.344828  383672 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19649-375661/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0916 17:10:22.365194  383672 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19649-375661/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0916 17:10:22.385183  383672 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19649-375661/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0916 17:10:22.405334  383672 provision.go:87] duration metric: took 293.856575ms to configureAuth
	I0916 17:10:22.405356  383672 buildroot.go:189] setting minikube options for container-runtime
	I0916 17:10:22.405527  383672 config.go:182] Loaded profile config "addons-214113": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0916 17:10:22.405550  383672 main.go:141] libmachine: (addons-214113) Calling .DriverName
	I0916 17:10:22.405781  383672 main.go:141] libmachine: (addons-214113) Calling .GetSSHHostname
	I0916 17:10:22.407975  383672 main.go:141] libmachine: (addons-214113) DBG | domain addons-214113 has defined MAC address 52:54:00:53:e2:c9 in network mk-addons-214113
	I0916 17:10:22.408345  383672 main.go:141] libmachine: (addons-214113) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:53:e2:c9", ip: ""} in network mk-addons-214113: {Iface:virbr1 ExpiryTime:2024-09-16 18:10:12 +0000 UTC Type:0 Mac:52:54:00:53:e2:c9 Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:addons-214113 Clientid:01:52:54:00:53:e2:c9}
	I0916 17:10:22.408367  383672 main.go:141] libmachine: (addons-214113) DBG | domain addons-214113 has defined IP address 192.168.39.110 and MAC address 52:54:00:53:e2:c9 in network mk-addons-214113
	I0916 17:10:22.408540  383672 main.go:141] libmachine: (addons-214113) Calling .GetSSHPort
	I0916 17:10:22.408692  383672 main.go:141] libmachine: (addons-214113) Calling .GetSSHKeyPath
	I0916 17:10:22.408814  383672 main.go:141] libmachine: (addons-214113) Calling .GetSSHKeyPath
	I0916 17:10:22.408961  383672 main.go:141] libmachine: (addons-214113) Calling .GetSSHUsername
	I0916 17:10:22.409150  383672 main.go:141] libmachine: Using SSH client type: native
	I0916 17:10:22.409299  383672 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.110 22 <nil> <nil>}
	I0916 17:10:22.409310  383672 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0916 17:10:22.513097  383672 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0916 17:10:22.513118  383672 buildroot.go:70] root file system type: tmpfs
	I0916 17:10:22.513247  383672 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0916 17:10:22.513269  383672 main.go:141] libmachine: (addons-214113) Calling .GetSSHHostname
	I0916 17:10:22.515525  383672 main.go:141] libmachine: (addons-214113) DBG | domain addons-214113 has defined MAC address 52:54:00:53:e2:c9 in network mk-addons-214113
	I0916 17:10:22.515806  383672 main.go:141] libmachine: (addons-214113) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:53:e2:c9", ip: ""} in network mk-addons-214113: {Iface:virbr1 ExpiryTime:2024-09-16 18:10:12 +0000 UTC Type:0 Mac:52:54:00:53:e2:c9 Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:addons-214113 Clientid:01:52:54:00:53:e2:c9}
	I0916 17:10:22.515827  383672 main.go:141] libmachine: (addons-214113) DBG | domain addons-214113 has defined IP address 192.168.39.110 and MAC address 52:54:00:53:e2:c9 in network mk-addons-214113
	I0916 17:10:22.516002  383672 main.go:141] libmachine: (addons-214113) Calling .GetSSHPort
	I0916 17:10:22.516188  383672 main.go:141] libmachine: (addons-214113) Calling .GetSSHKeyPath
	I0916 17:10:22.516325  383672 main.go:141] libmachine: (addons-214113) Calling .GetSSHKeyPath
	I0916 17:10:22.516446  383672 main.go:141] libmachine: (addons-214113) Calling .GetSSHUsername
	I0916 17:10:22.516590  383672 main.go:141] libmachine: Using SSH client type: native
	I0916 17:10:22.516792  383672 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.110 22 <nil> <nil>}
	I0916 17:10:22.516899  383672 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0916 17:10:22.631863  383672 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0916 17:10:22.631893  383672 main.go:141] libmachine: (addons-214113) Calling .GetSSHHostname
	I0916 17:10:22.634212  383672 main.go:141] libmachine: (addons-214113) DBG | domain addons-214113 has defined MAC address 52:54:00:53:e2:c9 in network mk-addons-214113
	I0916 17:10:22.634488  383672 main.go:141] libmachine: (addons-214113) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:53:e2:c9", ip: ""} in network mk-addons-214113: {Iface:virbr1 ExpiryTime:2024-09-16 18:10:12 +0000 UTC Type:0 Mac:52:54:00:53:e2:c9 Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:addons-214113 Clientid:01:52:54:00:53:e2:c9}
	I0916 17:10:22.634511  383672 main.go:141] libmachine: (addons-214113) DBG | domain addons-214113 has defined IP address 192.168.39.110 and MAC address 52:54:00:53:e2:c9 in network mk-addons-214113
	I0916 17:10:22.634600  383672 main.go:141] libmachine: (addons-214113) Calling .GetSSHPort
	I0916 17:10:22.634776  383672 main.go:141] libmachine: (addons-214113) Calling .GetSSHKeyPath
	I0916 17:10:22.634928  383672 main.go:141] libmachine: (addons-214113) Calling .GetSSHKeyPath
	I0916 17:10:22.635085  383672 main.go:141] libmachine: (addons-214113) Calling .GetSSHUsername
	I0916 17:10:22.635209  383672 main.go:141] libmachine: Using SSH client type: native
	I0916 17:10:22.635361  383672 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.110 22 <nil> <nil>}
	I0916 17:10:22.635377  383672 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0916 17:10:24.307062  383672 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
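Two details of the unit install above are worth noting: the empty `ExecStart=` line is the standard systemd idiom for clearing an inherited start command before setting a new one (the comments embedded in the unit explain why), and the `diff || { mv; daemon-reload; restart; }` wrapper makes the install idempotent, so an unchanged unit triggers no restart. A short text/template sketch of how such a unit body might be rendered; the template here is abbreviated and illustrative, not minikube's full unit.

```go
// dockerunit.go - sketch: render a docker systemd unit from a template,
// including the empty ExecStart= reset line, as the provisioner does above.
package main

import (
	"log"
	"os"
	"text/template"
)

const unitTmpl = `[Service]
Type=notify
# Clear any inherited ExecStart before setting ours; systemd rejects
# multiple ExecStart= lines for non-oneshot services otherwise.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:{{.Port}} -H unix:///var/run/docker.sock{{range .Flags}} {{.}}{{end}}
`

type unitData struct {
	Port  int
	Flags []string
}

func main() {
	t := template.Must(template.New("docker").Parse(unitTmpl))
	data := unitData{
		Port:  2376,
		Flags: []string{"--tlsverify", "--label provider=kvm2", "--insecure-registry 10.96.0.0/12"},
	}
	if err := t.Execute(os.Stdout, data); err != nil {
		log.Fatal(err)
	}
}
```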
	
	I0916 17:10:24.307090  383672 main.go:141] libmachine: Checking connection to Docker...
	I0916 17:10:24.307100  383672 main.go:141] libmachine: (addons-214113) Calling .GetURL
	I0916 17:10:24.308420  383672 main.go:141] libmachine: (addons-214113) DBG | Using libvirt version 6000000
	I0916 17:10:24.310729  383672 main.go:141] libmachine: (addons-214113) DBG | domain addons-214113 has defined MAC address 52:54:00:53:e2:c9 in network mk-addons-214113
	I0916 17:10:24.311063  383672 main.go:141] libmachine: (addons-214113) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:53:e2:c9", ip: ""} in network mk-addons-214113: {Iface:virbr1 ExpiryTime:2024-09-16 18:10:12 +0000 UTC Type:0 Mac:52:54:00:53:e2:c9 Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:addons-214113 Clientid:01:52:54:00:53:e2:c9}
	I0916 17:10:24.311102  383672 main.go:141] libmachine: (addons-214113) DBG | domain addons-214113 has defined IP address 192.168.39.110 and MAC address 52:54:00:53:e2:c9 in network mk-addons-214113
	I0916 17:10:24.311261  383672 main.go:141] libmachine: Docker is up and running!
	I0916 17:10:24.311273  383672 main.go:141] libmachine: Reticulating splines...
	I0916 17:10:24.311281  383672 client.go:171] duration metric: took 26.074497621s to LocalClient.Create
	I0916 17:10:24.311306  383672 start.go:167] duration metric: took 26.074571311s to libmachine.API.Create "addons-214113"
	I0916 17:10:24.311320  383672 start.go:293] postStartSetup for "addons-214113" (driver="kvm2")
	I0916 17:10:24.311334  383672 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0916 17:10:24.311358  383672 main.go:141] libmachine: (addons-214113) Calling .DriverName
	I0916 17:10:24.311608  383672 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0916 17:10:24.311633  383672 main.go:141] libmachine: (addons-214113) Calling .GetSSHHostname
	I0916 17:10:24.313821  383672 main.go:141] libmachine: (addons-214113) DBG | domain addons-214113 has defined MAC address 52:54:00:53:e2:c9 in network mk-addons-214113
	I0916 17:10:24.314133  383672 main.go:141] libmachine: (addons-214113) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:53:e2:c9", ip: ""} in network mk-addons-214113: {Iface:virbr1 ExpiryTime:2024-09-16 18:10:12 +0000 UTC Type:0 Mac:52:54:00:53:e2:c9 Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:addons-214113 Clientid:01:52:54:00:53:e2:c9}
	I0916 17:10:24.314156  383672 main.go:141] libmachine: (addons-214113) DBG | domain addons-214113 has defined IP address 192.168.39.110 and MAC address 52:54:00:53:e2:c9 in network mk-addons-214113
	I0916 17:10:24.314307  383672 main.go:141] libmachine: (addons-214113) Calling .GetSSHPort
	I0916 17:10:24.314489  383672 main.go:141] libmachine: (addons-214113) Calling .GetSSHKeyPath
	I0916 17:10:24.314623  383672 main.go:141] libmachine: (addons-214113) Calling .GetSSHUsername
	I0916 17:10:24.314759  383672 sshutil.go:53] new ssh client: &{IP:192.168.39.110 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19649-375661/.minikube/machines/addons-214113/id_rsa Username:docker}
	I0916 17:10:24.393554  383672 ssh_runner.go:195] Run: cat /etc/os-release
	I0916 17:10:24.397155  383672 info.go:137] Remote host: Buildroot 2023.02.9
	I0916 17:10:24.397174  383672 filesync.go:126] Scanning /home/jenkins/minikube-integration/19649-375661/.minikube/addons for local assets ...
	I0916 17:10:24.397240  383672 filesync.go:126] Scanning /home/jenkins/minikube-integration/19649-375661/.minikube/files for local assets ...
	I0916 17:10:24.397274  383672 start.go:296] duration metric: took 85.944658ms for postStartSetup
	I0916 17:10:24.397308  383672 main.go:141] libmachine: (addons-214113) Calling .GetConfigRaw
	I0916 17:10:24.397839  383672 main.go:141] libmachine: (addons-214113) Calling .GetIP
	I0916 17:10:24.400876  383672 main.go:141] libmachine: (addons-214113) DBG | domain addons-214113 has defined MAC address 52:54:00:53:e2:c9 in network mk-addons-214113
	I0916 17:10:24.401296  383672 main.go:141] libmachine: (addons-214113) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:53:e2:c9", ip: ""} in network mk-addons-214113: {Iface:virbr1 ExpiryTime:2024-09-16 18:10:12 +0000 UTC Type:0 Mac:52:54:00:53:e2:c9 Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:addons-214113 Clientid:01:52:54:00:53:e2:c9}
	I0916 17:10:24.401322  383672 main.go:141] libmachine: (addons-214113) DBG | domain addons-214113 has defined IP address 192.168.39.110 and MAC address 52:54:00:53:e2:c9 in network mk-addons-214113
	I0916 17:10:24.401590  383672 profile.go:143] Saving config to /home/jenkins/minikube-integration/19649-375661/.minikube/profiles/addons-214113/config.json ...
	I0916 17:10:24.401786  383672 start.go:128] duration metric: took 26.181459767s to createHost
	I0916 17:10:24.401808  383672 main.go:141] libmachine: (addons-214113) Calling .GetSSHHostname
	I0916 17:10:24.404017  383672 main.go:141] libmachine: (addons-214113) DBG | domain addons-214113 has defined MAC address 52:54:00:53:e2:c9 in network mk-addons-214113
	I0916 17:10:24.404289  383672 main.go:141] libmachine: (addons-214113) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:53:e2:c9", ip: ""} in network mk-addons-214113: {Iface:virbr1 ExpiryTime:2024-09-16 18:10:12 +0000 UTC Type:0 Mac:52:54:00:53:e2:c9 Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:addons-214113 Clientid:01:52:54:00:53:e2:c9}
	I0916 17:10:24.404315  383672 main.go:141] libmachine: (addons-214113) DBG | domain addons-214113 has defined IP address 192.168.39.110 and MAC address 52:54:00:53:e2:c9 in network mk-addons-214113
	I0916 17:10:24.404471  383672 main.go:141] libmachine: (addons-214113) Calling .GetSSHPort
	I0916 17:10:24.404633  383672 main.go:141] libmachine: (addons-214113) Calling .GetSSHKeyPath
	I0916 17:10:24.404765  383672 main.go:141] libmachine: (addons-214113) Calling .GetSSHKeyPath
	I0916 17:10:24.404861  383672 main.go:141] libmachine: (addons-214113) Calling .GetSSHUsername
	I0916 17:10:24.404982  383672 main.go:141] libmachine: Using SSH client type: native
	I0916 17:10:24.405163  383672 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.110 22 <nil> <nil>}
	I0916 17:10:24.405174  383672 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0916 17:10:24.508418  383672 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726506624.480315000
	
	I0916 17:10:24.508439  383672 fix.go:216] guest clock: 1726506624.480315000
	I0916 17:10:24.508451  383672 fix.go:229] Guest: 2024-09-16 17:10:24.480315 +0000 UTC Remote: 2024-09-16 17:10:24.401798122 +0000 UTC m=+26.274772341 (delta=78.516878ms)
	I0916 17:10:24.508513  383672 fix.go:200] guest clock delta is within tolerance: 78.516878ms
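The guest-clock check above parses the `date +%s.%N` output and compares it to the host clock, resyncing only when the delta exceeds a tolerance. A sketch of the parse and comparison; the 2s threshold is an assumption for illustration, not necessarily the value minikube uses.

```go
// clockdelta.go - sketch: parse "date +%s.%N" output from a guest and
// check the skew against the host clock, as in the guest-clock step above.
package main

import (
	"fmt"
	"log"
	"strconv"
	"strings"
	"time"
)

func parseUnixNano(out string) (time.Time, error) {
	sec, frac, _ := strings.Cut(strings.TrimSpace(out), ".")
	s, err := strconv.ParseInt(sec, 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	var ns int64
	if frac != "" {
		n, err := strconv.ParseInt(frac, 10, 64)
		if err != nil {
			return time.Time{}, err
		}
		// Right-pad the fractional part to nanoseconds
		// (date +%s.%N normally emits all 9 digits).
		for i := len(frac); i < 9; i++ {
			n *= 10
		}
		ns = n
	}
	return time.Unix(s, ns), nil
}

func main() {
	guest, err := parseUnixNano("1726506624.480315000") // output from the log above
	if err != nil {
		log.Fatal(err)
	}
	delta := time.Since(guest)
	if delta < 0 {
		delta = -delta
	}
	const tolerance = 2 * time.Second // illustrative threshold
	fmt.Printf("guest clock delta: %v (tolerance %v)\n", delta, tolerance)
	if delta > tolerance {
		fmt.Println("would resync guest clock")
	}
}
```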
	I0916 17:10:24.508521  383672 start.go:83] releasing machines lock for "addons-214113", held for 26.288270052s
	I0916 17:10:24.508548  383672 main.go:141] libmachine: (addons-214113) Calling .DriverName
	I0916 17:10:24.508733  383672 main.go:141] libmachine: (addons-214113) Calling .GetIP
	I0916 17:10:24.510912  383672 main.go:141] libmachine: (addons-214113) DBG | domain addons-214113 has defined MAC address 52:54:00:53:e2:c9 in network mk-addons-214113
	I0916 17:10:24.511188  383672 main.go:141] libmachine: (addons-214113) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:53:e2:c9", ip: ""} in network mk-addons-214113: {Iface:virbr1 ExpiryTime:2024-09-16 18:10:12 +0000 UTC Type:0 Mac:52:54:00:53:e2:c9 Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:addons-214113 Clientid:01:52:54:00:53:e2:c9}
	I0916 17:10:24.511203  383672 main.go:141] libmachine: (addons-214113) DBG | domain addons-214113 has defined IP address 192.168.39.110 and MAC address 52:54:00:53:e2:c9 in network mk-addons-214113
	I0916 17:10:24.511345  383672 main.go:141] libmachine: (addons-214113) Calling .DriverName
	I0916 17:10:24.511725  383672 main.go:141] libmachine: (addons-214113) Calling .DriverName
	I0916 17:10:24.511869  383672 main.go:141] libmachine: (addons-214113) Calling .DriverName
	I0916 17:10:24.511949  383672 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0916 17:10:24.511996  383672 main.go:141] libmachine: (addons-214113) Calling .GetSSHHostname
	I0916 17:10:24.512049  383672 ssh_runner.go:195] Run: cat /version.json
	I0916 17:10:24.512074  383672 main.go:141] libmachine: (addons-214113) Calling .GetSSHHostname
	I0916 17:10:24.514325  383672 main.go:141] libmachine: (addons-214113) DBG | domain addons-214113 has defined MAC address 52:54:00:53:e2:c9 in network mk-addons-214113
	I0916 17:10:24.514602  383672 main.go:141] libmachine: (addons-214113) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:53:e2:c9", ip: ""} in network mk-addons-214113: {Iface:virbr1 ExpiryTime:2024-09-16 18:10:12 +0000 UTC Type:0 Mac:52:54:00:53:e2:c9 Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:addons-214113 Clientid:01:52:54:00:53:e2:c9}
	I0916 17:10:24.514630  383672 main.go:141] libmachine: (addons-214113) DBG | domain addons-214113 has defined MAC address 52:54:00:53:e2:c9 in network mk-addons-214113
	I0916 17:10:24.514648  383672 main.go:141] libmachine: (addons-214113) DBG | domain addons-214113 has defined IP address 192.168.39.110 and MAC address 52:54:00:53:e2:c9 in network mk-addons-214113
	I0916 17:10:24.514749  383672 main.go:141] libmachine: (addons-214113) Calling .GetSSHPort
	I0916 17:10:24.514922  383672 main.go:141] libmachine: (addons-214113) Calling .GetSSHKeyPath
	I0916 17:10:24.515032  383672 main.go:141] libmachine: (addons-214113) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:53:e2:c9", ip: ""} in network mk-addons-214113: {Iface:virbr1 ExpiryTime:2024-09-16 18:10:12 +0000 UTC Type:0 Mac:52:54:00:53:e2:c9 Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:addons-214113 Clientid:01:52:54:00:53:e2:c9}
	I0916 17:10:24.515037  383672 main.go:141] libmachine: (addons-214113) Calling .GetSSHUsername
	I0916 17:10:24.515056  383672 main.go:141] libmachine: (addons-214113) DBG | domain addons-214113 has defined IP address 192.168.39.110 and MAC address 52:54:00:53:e2:c9 in network mk-addons-214113
	I0916 17:10:24.515136  383672 sshutil.go:53] new ssh client: &{IP:192.168.39.110 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19649-375661/.minikube/machines/addons-214113/id_rsa Username:docker}
	I0916 17:10:24.515206  383672 main.go:141] libmachine: (addons-214113) Calling .GetSSHPort
	I0916 17:10:24.515334  383672 main.go:141] libmachine: (addons-214113) Calling .GetSSHKeyPath
	I0916 17:10:24.515482  383672 main.go:141] libmachine: (addons-214113) Calling .GetSSHUsername
	I0916 17:10:24.515615  383672 sshutil.go:53] new ssh client: &{IP:192.168.39.110 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19649-375661/.minikube/machines/addons-214113/id_rsa Username:docker}
	I0916 17:10:24.620272  383672 ssh_runner.go:195] Run: systemctl --version
	I0916 17:10:24.625367  383672 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0916 17:10:24.630042  383672 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0916 17:10:24.630116  383672 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0916 17:10:24.645186  383672 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0916 17:10:24.645215  383672 start.go:495] detecting cgroup driver to use...
	I0916 17:10:24.645340  383672 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0916 17:10:24.660915  383672 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0916 17:10:24.669716  383672 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0916 17:10:24.678555  383672 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0916 17:10:24.678609  383672 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0916 17:10:24.687533  383672 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0916 17:10:24.696358  383672 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0916 17:10:24.705081  383672 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0916 17:10:24.714074  383672 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0916 17:10:24.723175  383672 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0916 17:10:24.731925  383672 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0916 17:10:24.740772  383672 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0916 17:10:24.749697  383672 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0916 17:10:24.757798  383672 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0916 17:10:24.765814  383672 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 17:10:24.868600  383672 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0916 17:10:24.892569  383672 start.go:495] detecting cgroup driver to use...
	I0916 17:10:24.892643  383672 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0916 17:10:24.907263  383672 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0916 17:10:24.918957  383672 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0916 17:10:24.936415  383672 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0916 17:10:24.948490  383672 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0916 17:10:24.960247  383672 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0916 17:10:24.987313  383672 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0916 17:10:24.999097  383672 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0916 17:10:25.014998  383672 ssh_runner.go:195] Run: which cri-dockerd
	I0916 17:10:25.018326  383672 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0916 17:10:25.026339  383672 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0916 17:10:25.042610  383672 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0916 17:10:25.147821  383672 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0916 17:10:25.257863  383672 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0916 17:10:25.257985  383672 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0916 17:10:25.272503  383672 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 17:10:25.380434  383672 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0916 17:10:27.680273  383672 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.299800936s)
	I0916 17:10:27.680358  383672 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0916 17:10:27.692252  383672 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0916 17:10:27.703953  383672 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0916 17:10:27.806804  383672 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0916 17:10:27.918667  383672 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 17:10:28.032014  383672 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0916 17:10:28.046235  383672 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0916 17:10:28.057890  383672 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 17:10:28.161198  383672 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0916 17:10:28.232125  383672 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0916 17:10:28.232239  383672 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0916 17:10:28.236972  383672 start.go:563] Will wait 60s for crictl version
	I0916 17:10:28.237025  383672 ssh_runner.go:195] Run: which crictl
	I0916 17:10:28.240795  383672 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0916 17:10:28.274394  383672 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.2.1
	RuntimeApiVersion:  v1
	I0916 17:10:28.274472  383672 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0916 17:10:28.297471  383672 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0916 17:10:28.319379  383672 out.go:235] * Preparing Kubernetes v1.31.1 on Docker 27.2.1 ...
	I0916 17:10:28.319419  383672 main.go:141] libmachine: (addons-214113) Calling .GetIP
	I0916 17:10:28.321999  383672 main.go:141] libmachine: (addons-214113) DBG | domain addons-214113 has defined MAC address 52:54:00:53:e2:c9 in network mk-addons-214113
	I0916 17:10:28.322302  383672 main.go:141] libmachine: (addons-214113) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:53:e2:c9", ip: ""} in network mk-addons-214113: {Iface:virbr1 ExpiryTime:2024-09-16 18:10:12 +0000 UTC Type:0 Mac:52:54:00:53:e2:c9 Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:addons-214113 Clientid:01:52:54:00:53:e2:c9}
	I0916 17:10:28.322326  383672 main.go:141] libmachine: (addons-214113) DBG | domain addons-214113 has defined IP address 192.168.39.110 and MAC address 52:54:00:53:e2:c9 in network mk-addons-214113
	I0916 17:10:28.322522  383672 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0916 17:10:28.325764  383672 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0916 17:10:28.336521  383672 kubeadm.go:883] updating cluster {Name:addons-214113 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726481311-19649@sha256:b5dfdcf7ad8b49233db09f1c58aaf52f6522fde64cf16c939b3fc45365d24cdc Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-214113 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.110 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0916 17:10:28.336633  383672 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0916 17:10:28.336677  383672 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0916 17:10:28.348340  383672 docker.go:685] Got preloaded images: 
	I0916 17:10:28.348358  383672 docker.go:691] registry.k8s.io/kube-apiserver:v1.31.1 wasn't preloaded
	I0916 17:10:28.348388  383672 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0916 17:10:28.356684  383672 ssh_runner.go:195] Run: which lz4
	I0916 17:10:28.359901  383672 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0916 17:10:28.363204  383672 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0916 17:10:28.363235  383672 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19649-375661/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (342028912 bytes)
	I0916 17:10:29.315053  383672 docker.go:649] duration metric: took 955.183466ms to copy over tarball
	I0916 17:10:29.315127  383672 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0916 17:10:31.008823  383672 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.693641533s)
	I0916 17:10:31.008854  383672 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0916 17:10:31.044314  383672 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0916 17:10:31.053513  383672 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2631 bytes)
	I0916 17:10:31.068118  383672 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 17:10:31.175372  383672 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0916 17:10:35.372909  383672 ssh_runner.go:235] Completed: sudo systemctl restart docker: (4.197496247s)
	I0916 17:10:35.373004  383672 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0916 17:10:35.392565  383672 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.31.1
	registry.k8s.io/kube-scheduler:v1.31.1
	registry.k8s.io/kube-controller-manager:v1.31.1
	registry.k8s.io/kube-proxy:v1.31.1
	registry.k8s.io/coredns/coredns:v1.11.3
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0916 17:10:35.392594  383672 cache_images.go:84] Images are preloaded, skipping loading
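The "Images are preloaded" decision compares the `docker images --format {{.Repository}}:{{.Tag}}` listing against the images the preload tarball should have delivered. A sketch of that check via os/exec, with the expected list copied from the stdout block above:

```go
// preloadcheck.go - sketch: verify the preloaded images are present by
// listing docker images, the shape of the skip-loading decision above.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("docker", "images", "--format", "{{.Repository}}:{{.Tag}}").Output()
	if err != nil {
		fmt.Println("docker images failed:", err)
		return
	}
	have := map[string]bool{}
	for _, img := range strings.Fields(string(out)) {
		have[img] = true
	}
	expected := []string{ // from the preload for v1.31.1 / docker, per the log
		"registry.k8s.io/kube-apiserver:v1.31.1",
		"registry.k8s.io/kube-scheduler:v1.31.1",
		"registry.k8s.io/kube-controller-manager:v1.31.1",
		"registry.k8s.io/kube-proxy:v1.31.1",
		"registry.k8s.io/coredns/coredns:v1.11.3",
		"registry.k8s.io/etcd:3.5.15-0",
		"registry.k8s.io/pause:3.10",
		"gcr.io/k8s-minikube/storage-provisioner:v5",
	}
	for _, img := range expected {
		if !have[img] {
			fmt.Println("missing, would load:", img)
			return
		}
	}
	fmt.Println("images are preloaded, skipping loading")
}
```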
	I0916 17:10:35.392606  383672 kubeadm.go:934] updating node { 192.168.39.110 8443 v1.31.1 docker true true} ...
	I0916 17:10:35.392735  383672 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-214113 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.110
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:addons-214113 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0916 17:10:35.392788  383672 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0916 17:10:35.444610  383672 cni.go:84] Creating CNI manager for ""
	I0916 17:10:35.444648  383672 cni.go:158] "kvm2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0916 17:10:35.444662  383672 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0916 17:10:35.444689  383672 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.110 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-214113 NodeName:addons-214113 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.110"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.110 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0916 17:10:35.444901  383672 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.110
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "addons-214113"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.110
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.110"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0916 17:10:35.444984  383672 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0916 17:10:35.454141  383672 binaries.go:44] Found k8s binaries, skipping transfer
	I0916 17:10:35.454208  383672 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0916 17:10:35.462629  383672 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (314 bytes)
	I0916 17:10:35.476900  383672 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0916 17:10:35.490822  383672 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2161 bytes)
	I0916 17:10:35.504806  383672 ssh_runner.go:195] Run: grep 192.168.39.110	control-plane.minikube.internal$ /etc/hosts
	I0916 17:10:35.507983  383672 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.110	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0916 17:10:35.518507  383672 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 17:10:35.625445  383672 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0916 17:10:35.644231  383672 certs.go:68] Setting up /home/jenkins/minikube-integration/19649-375661/.minikube/profiles/addons-214113 for IP: 192.168.39.110
	I0916 17:10:35.644256  383672 certs.go:194] generating shared ca certs ...
	I0916 17:10:35.644281  383672 certs.go:226] acquiring lock for ca certs: {Name:mkc8fa16a52fc35f60c0deee861c713b3f648ea3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 17:10:35.644461  383672 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19649-375661/.minikube/ca.key
	I0916 17:10:35.899010  383672 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19649-375661/.minikube/ca.crt ...
	I0916 17:10:35.899037  383672 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19649-375661/.minikube/ca.crt: {Name:mk42baf02cc8ce0449ad13a0ded7ad231ab2faeb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 17:10:35.899215  383672 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19649-375661/.minikube/ca.key ...
	I0916 17:10:35.899232  383672 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19649-375661/.minikube/ca.key: {Name:mk797ea1df0f0c588a52796ba10caaf84c58205f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 17:10:35.899331  383672 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19649-375661/.minikube/proxy-client-ca.key
	I0916 17:10:36.193863  383672 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19649-375661/.minikube/proxy-client-ca.crt ...
	I0916 17:10:36.193894  383672 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19649-375661/.minikube/proxy-client-ca.crt: {Name:mk4fde7b25abec9549d687aa5cb27af04346bc13 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 17:10:36.194056  383672 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19649-375661/.minikube/proxy-client-ca.key ...
	I0916 17:10:36.194067  383672 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19649-375661/.minikube/proxy-client-ca.key: {Name:mke9312687bc4106d1468b4306c0498a912f8fe6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 17:10:36.194133  383672 certs.go:256] generating profile certs ...
	I0916 17:10:36.194190  383672 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19649-375661/.minikube/profiles/addons-214113/client.key
	I0916 17:10:36.194212  383672 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19649-375661/.minikube/profiles/addons-214113/client.crt with IP's: []
	I0916 17:10:36.465702  383672 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19649-375661/.minikube/profiles/addons-214113/client.crt ...
	I0916 17:10:36.465733  383672 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19649-375661/.minikube/profiles/addons-214113/client.crt: {Name:mk46abba34174592f4fb208ba47e3477ef86f254 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 17:10:36.465912  383672 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19649-375661/.minikube/profiles/addons-214113/client.key ...
	I0916 17:10:36.465924  383672 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19649-375661/.minikube/profiles/addons-214113/client.key: {Name:mkc3ace085401dbda7b4fc54caab3fea7130320a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 17:10:36.466003  383672 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19649-375661/.minikube/profiles/addons-214113/apiserver.key.86c5aee7
	I0916 17:10:36.466023  383672 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19649-375661/.minikube/profiles/addons-214113/apiserver.crt.86c5aee7 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.110]
	I0916 17:10:36.623715  383672 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19649-375661/.minikube/profiles/addons-214113/apiserver.crt.86c5aee7 ...
	I0916 17:10:36.623746  383672 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19649-375661/.minikube/profiles/addons-214113/apiserver.crt.86c5aee7: {Name:mk5e64cac8945282870673732868fce36d36989b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 17:10:36.623905  383672 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19649-375661/.minikube/profiles/addons-214113/apiserver.key.86c5aee7 ...
	I0916 17:10:36.623920  383672 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19649-375661/.minikube/profiles/addons-214113/apiserver.key.86c5aee7: {Name:mk1d16c009edb5cd1c77e904ec0a08bf06026e0b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 17:10:36.623997  383672 certs.go:381] copying /home/jenkins/minikube-integration/19649-375661/.minikube/profiles/addons-214113/apiserver.crt.86c5aee7 -> /home/jenkins/minikube-integration/19649-375661/.minikube/profiles/addons-214113/apiserver.crt
	I0916 17:10:36.624076  383672 certs.go:385] copying /home/jenkins/minikube-integration/19649-375661/.minikube/profiles/addons-214113/apiserver.key.86c5aee7 -> /home/jenkins/minikube-integration/19649-375661/.minikube/profiles/addons-214113/apiserver.key
	I0916 17:10:36.624128  383672 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19649-375661/.minikube/profiles/addons-214113/proxy-client.key
	I0916 17:10:36.624147  383672 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19649-375661/.minikube/profiles/addons-214113/proxy-client.crt with IP's: []
	I0916 17:10:36.776781  383672 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19649-375661/.minikube/profiles/addons-214113/proxy-client.crt ...
	I0916 17:10:36.776805  383672 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19649-375661/.minikube/profiles/addons-214113/proxy-client.crt: {Name:mk5f7c5451b80dfa064ca5a22cd912919c5dc8e7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 17:10:36.776929  383672 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19649-375661/.minikube/profiles/addons-214113/proxy-client.key ...
	I0916 17:10:36.776941  383672 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19649-375661/.minikube/profiles/addons-214113/proxy-client.key: {Name:mkfd2229ce4ef23c4672663932eedc89554d8ac4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 17:10:36.777139  383672 certs.go:484] found cert: /home/jenkins/minikube-integration/19649-375661/.minikube/certs/ca-key.pem (1679 bytes)
	I0916 17:10:36.777176  383672 certs.go:484] found cert: /home/jenkins/minikube-integration/19649-375661/.minikube/certs/ca.pem (1078 bytes)
	I0916 17:10:36.777201  383672 certs.go:484] found cert: /home/jenkins/minikube-integration/19649-375661/.minikube/certs/cert.pem (1123 bytes)
	I0916 17:10:36.777227  383672 certs.go:484] found cert: /home/jenkins/minikube-integration/19649-375661/.minikube/certs/key.pem (1675 bytes)
	I0916 17:10:36.777873  383672 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19649-375661/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0916 17:10:36.800095  383672 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19649-375661/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0916 17:10:36.820073  383672 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19649-375661/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0916 17:10:36.840009  383672 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19649-375661/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0916 17:10:36.860032  383672 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19649-375661/.minikube/profiles/addons-214113/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0916 17:10:36.881116  383672 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19649-375661/.minikube/profiles/addons-214113/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0916 17:10:36.901939  383672 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19649-375661/.minikube/profiles/addons-214113/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0916 17:10:36.922906  383672 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19649-375661/.minikube/profiles/addons-214113/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0916 17:10:36.943819  383672 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19649-375661/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0916 17:10:36.964643  383672 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0916 17:10:36.979182  383672 ssh_runner.go:195] Run: openssl version
	I0916 17:10:36.984136  383672 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0916 17:10:36.996539  383672 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0916 17:10:37.000928  383672 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 16 17:10 /usr/share/ca-certificates/minikubeCA.pem
	I0916 17:10:37.000987  383672 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0916 17:10:37.009111  383672 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
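The hash-and-symlink pair above follows OpenSSL's CA-directory convention: a trusted certificate in /etc/ssl/certs must be reachable via a link named after its subject-name hash with a .0 suffix, which is how OpenSSL-linked clients locate minikube's CA. Done by hand, it is the same two commands the log shows:

	openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints the subject hash, here b5213941
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0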
	I0916 17:10:37.023057  383672 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0916 17:10:37.028125  383672 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0916 17:10:37.028170  383672 kubeadm.go:392] StartCluster: {Name:addons-214113 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726481311-19649@sha256:b5dfdcf7ad8b49233db09f1c58aaf52f6522fde64cf16c939b3fc45365d24cdc Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-214113 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.110 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 17:10:37.028283  383672 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0916 17:10:37.046436  383672 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0916 17:10:37.054721  383672 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0916 17:10:37.062879  383672 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0916 17:10:37.071054  383672 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0916 17:10:37.071075  383672 kubeadm.go:157] found existing configuration files:
	
	I0916 17:10:37.071118  383672 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0916 17:10:37.078694  383672 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0916 17:10:37.078771  383672 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0916 17:10:37.086924  383672 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0916 17:10:37.094720  383672 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0916 17:10:37.094757  383672 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0916 17:10:37.102861  383672 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0916 17:10:37.110452  383672 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0916 17:10:37.110508  383672 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0916 17:10:37.118270  383672 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0916 17:10:37.125858  383672 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0916 17:10:37.125893  383672 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0916 17:10:37.133786  383672 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0916 17:10:37.177847  383672 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0916 17:10:37.177916  383672 kubeadm.go:310] [preflight] Running pre-flight checks
	I0916 17:10:37.282475  383672 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0916 17:10:37.282621  383672 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0916 17:10:37.282725  383672 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0916 17:10:37.296109  383672 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0916 17:10:37.298209  383672 out.go:235]   - Generating certificates and keys ...
	I0916 17:10:37.298296  383672 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0916 17:10:37.298371  383672 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0916 17:10:37.485582  383672 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0916 17:10:37.781510  383672 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0916 17:10:37.927703  383672 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0916 17:10:38.072779  383672 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0916 17:10:38.280611  383672 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0916 17:10:38.280766  383672 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-214113 localhost] and IPs [192.168.39.110 127.0.0.1 ::1]
	I0916 17:10:38.339157  383672 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0916 17:10:38.339371  383672 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-214113 localhost] and IPs [192.168.39.110 127.0.0.1 ::1]
	I0916 17:10:38.521253  383672 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0916 17:10:38.688815  383672 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0916 17:10:38.925132  383672 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0916 17:10:38.925277  383672 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0916 17:10:39.088133  383672 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0916 17:10:39.339539  383672 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0916 17:10:39.702006  383672 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0916 17:10:39.766970  383672 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0916 17:10:39.856554  383672 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0916 17:10:39.857261  383672 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0916 17:10:39.859569  383672 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0916 17:10:39.861461  383672 out.go:235]   - Booting up control plane ...
	I0916 17:10:39.861591  383672 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0916 17:10:39.861691  383672 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0916 17:10:39.861783  383672 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0916 17:10:39.875758  383672 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0916 17:10:39.881425  383672 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0916 17:10:39.881506  383672 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0916 17:10:39.996350  383672 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0916 17:10:39.996509  383672 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0916 17:10:40.497955  383672 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.697039ms
	I0916 17:10:40.498076  383672 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0916 17:10:44.998623  383672 kubeadm.go:310] [api-check] The API server is healthy after 4.501055424s
	I0916 17:10:45.013682  383672 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0916 17:10:45.023645  383672 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0916 17:10:45.052425  383672 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0916 17:10:45.052694  383672 kubeadm.go:310] [mark-control-plane] Marking the node addons-214113 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0916 17:10:45.064565  383672 kubeadm.go:310] [bootstrap-token] Using token: zbjja2.u65t7fgjew3kmgtg
	I0916 17:10:45.065757  383672 out.go:235]   - Configuring RBAC rules ...
	I0916 17:10:45.065877  383672 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0916 17:10:45.070550  383672 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0916 17:10:45.077459  383672 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0916 17:10:45.080085  383672 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0916 17:10:45.082912  383672 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0916 17:10:45.087697  383672 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0916 17:10:45.405242  383672 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0916 17:10:45.829388  383672 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0916 17:10:46.405155  383672 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0916 17:10:46.406129  383672 kubeadm.go:310] 
	I0916 17:10:46.406242  383672 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0916 17:10:46.406260  383672 kubeadm.go:310] 
	I0916 17:10:46.406337  383672 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0916 17:10:46.406344  383672 kubeadm.go:310] 
	I0916 17:10:46.406365  383672 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0916 17:10:46.406461  383672 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0916 17:10:46.406551  383672 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0916 17:10:46.406566  383672 kubeadm.go:310] 
	I0916 17:10:46.406643  383672 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0916 17:10:46.406653  383672 kubeadm.go:310] 
	I0916 17:10:46.406732  383672 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0916 17:10:46.406749  383672 kubeadm.go:310] 
	I0916 17:10:46.406822  383672 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0916 17:10:46.406887  383672 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0916 17:10:46.406996  383672 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0916 17:10:46.407008  383672 kubeadm.go:310] 
	I0916 17:10:46.407117  383672 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0916 17:10:46.407234  383672 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0916 17:10:46.407246  383672 kubeadm.go:310] 
	I0916 17:10:46.407361  383672 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token zbjja2.u65t7fgjew3kmgtg \
	I0916 17:10:46.407503  383672 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:b0cdbc041d2bee0afdf8a1c7674628b5fe51701d33dd9a4ff4813ccf5b1cd942 \
	I0916 17:10:46.407552  383672 kubeadm.go:310] 	--control-plane 
	I0916 17:10:46.407567  383672 kubeadm.go:310] 
	I0916 17:10:46.407694  383672 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0916 17:10:46.407702  383672 kubeadm.go:310] 
	I0916 17:10:46.407827  383672 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token zbjja2.u65t7fgjew3kmgtg \
	I0916 17:10:46.407975  383672 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:b0cdbc041d2bee0afdf8a1c7674628b5fe51701d33dd9a4ff4813ccf5b1cd942 
	I0916 17:10:46.408568  383672 kubeadm.go:310] W0916 17:10:37.149152    1516 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0916 17:10:46.408853  383672 kubeadm.go:310] W0916 17:10:37.149862    1516 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0916 17:10:46.408977  383672 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
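Both W-level lines are kubeadm warning that the generated /var/tmp/minikube/kubeadm.yaml still uses the deprecated kubeadm.k8s.io/v1beta3 API (v1beta4 is current as of Kubernetes v1.31); the remedy kubeadm itself suggests is a one-shot conversion of the config file:

	kubeadm config migrate --old-config old.yaml --new-config new.yaml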
	I0916 17:10:46.409019  383672 cni.go:84] Creating CNI manager for ""
	I0916 17:10:46.409049  383672 cni.go:158] "kvm2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0916 17:10:46.411068  383672 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0916 17:10:46.411961  383672 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0916 17:10:46.421613  383672 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
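The 496 bytes written to /etc/cni/net.d/1-k8s.conflist carry the bridge CNI configuration announced two lines up. Minikube generates the file in memory and the log does not print it, but a representative bridge-plus-portmap conflist of the same general shape looks like this (values illustrative, not the actual file contents):

{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "addIf": "true",
      "isDefaultGateway": true,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": {
        "type": "host-local",
        "subnet": "10.244.0.0/16"
      }
    },
    {
      "type": "portmap",
      "capabilities": { "portMappings": true }
    }
  ]
}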
	I0916 17:10:46.438502  383672 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0916 17:10:46.438623  383672 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 17:10:46.438644  383672 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-214113 minikube.k8s.io/updated_at=2024_09_16T17_10_46_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=91d692c919753635ac118b7ed7ae5503b67c63c8 minikube.k8s.io/name=addons-214113 minikube.k8s.io/primary=true
	I0916 17:10:46.448126  383672 ops.go:34] apiserver oom_adj: -16
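An oom_adj of -16 (on the legacy -17..15 scale) tells the kernel's OOM killer to spare the kube-apiserver almost unconditionally; minikube verifies it by reading procfs directly, exactly as the Run line above shows:

	cat /proc/$(pgrep kube-apiserver)/oom_adj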
	I0916 17:10:46.555775  383672 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 17:10:47.055990  383672 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 17:10:47.555977  383672 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 17:10:48.056069  383672 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 17:10:48.555937  383672 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 17:10:49.055944  383672 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 17:10:49.556400  383672 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 17:10:50.056815  383672 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 17:10:50.556024  383672 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 17:10:50.628329  383672 kubeadm.go:1113] duration metric: took 4.189766078s to wait for elevateKubeSystemPrivileges
	I0916 17:10:50.628381  383672 kubeadm.go:394] duration metric: took 13.600216348s to StartCluster
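The burst of `kubectl get sa default` calls at a 500ms cadence above is minikube waiting for the token controller to create the default service account, which is what the elevateKubeSystemPrivileges duration metric measures; pods using that account cannot be admitted until it exists. A hedged client-go equivalent of the wait loop (a hypothetical stand-alone helper, not minikube's implementation):

package main

import (
	"context"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// Poll every 500ms, for up to 6 minutes, until the token controller
	// has created the "default" ServiceAccount in the default namespace.
	err = wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, 6*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			_, err := client.CoreV1().ServiceAccounts("default").Get(ctx, "default", metav1.GetOptions{})
			return err == nil, nil
		})
	if err != nil {
		panic(err)
	}
	fmt.Println("default service account is ready")
}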
	I0916 17:10:50.628414  383672 settings.go:142] acquiring lock: {Name:mk90ac4bb17bc24345f647de1b9960bdc5512e5a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 17:10:50.628551  383672 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19649-375661/kubeconfig
	I0916 17:10:50.628928  383672 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19649-375661/kubeconfig: {Name:mk6abc7b7ad6c1805a689a3c701cdf1215e02a7c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 17:10:50.629168  383672 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0916 17:10:50.629206  383672 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.110 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0916 17:10:50.629328  383672 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0916 17:10:50.629474  383672 config.go:182] Loaded profile config "addons-214113": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0916 17:10:50.629483  383672 addons.go:69] Setting ingress=true in profile "addons-214113"
	I0916 17:10:50.629498  383672 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-214113"
	I0916 17:10:50.629505  383672 addons.go:69] Setting registry=true in profile "addons-214113"
	I0916 17:10:50.629516  383672 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-214113"
	I0916 17:10:50.629476  383672 addons.go:69] Setting yakd=true in profile "addons-214113"
	I0916 17:10:50.629525  383672 addons.go:234] Setting addon registry=true in "addons-214113"
	I0916 17:10:50.629533  383672 addons.go:69] Setting cloud-spanner=true in profile "addons-214113"
	I0916 17:10:50.629528  383672 addons.go:69] Setting inspektor-gadget=true in profile "addons-214113"
	I0916 17:10:50.629559  383672 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-214113"
	I0916 17:10:50.629573  383672 addons.go:234] Setting addon inspektor-gadget=true in "addons-214113"
	I0916 17:10:50.629523  383672 addons.go:234] Setting addon ingress=true in "addons-214113"
	I0916 17:10:50.629603  383672 host.go:66] Checking if "addons-214113" exists ...
	I0916 17:10:50.629611  383672 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-214113"
	I0916 17:10:50.629574  383672 host.go:66] Checking if "addons-214113" exists ...
	I0916 17:10:50.629637  383672 host.go:66] Checking if "addons-214113" exists ...
	I0916 17:10:50.629647  383672 addons.go:69] Setting gcp-auth=true in profile "addons-214113"
	I0916 17:10:50.629681  383672 mustload.go:65] Loading cluster: addons-214113
	I0916 17:10:50.629843  383672 config.go:182] Loaded profile config "addons-214113": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0916 17:10:50.629996  383672 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0916 17:10:50.629551  383672 addons.go:234] Setting addon cloud-spanner=true in "addons-214113"
	I0916 17:10:50.630031  383672 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0916 17:10:50.630047  383672 addons.go:69] Setting metrics-server=true in profile "addons-214113"
	I0916 17:10:50.630064  383672 addons.go:234] Setting addon metrics-server=true in "addons-214113"
	I0916 17:10:50.630067  383672 addons.go:69] Setting ingress-dns=true in profile "addons-214113"
	I0916 17:10:50.630077  383672 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0916 17:10:50.630039  383672 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 17:10:50.630098  383672 host.go:66] Checking if "addons-214113" exists ...
	I0916 17:10:50.630113  383672 addons.go:69] Setting helm-tiller=true in profile "addons-214113"
	I0916 17:10:50.630125  383672 addons.go:234] Setting addon helm-tiller=true in "addons-214113"
	I0916 17:10:50.629490  383672 addons.go:69] Setting storage-provisioner=true in profile "addons-214113"
	I0916 17:10:50.629613  383672 host.go:66] Checking if "addons-214113" exists ...
	I0916 17:10:50.630150  383672 addons.go:234] Setting addon storage-provisioner=true in "addons-214113"
	I0916 17:10:50.630064  383672 addons.go:69] Setting volumesnapshots=true in profile "addons-214113"
	I0916 17:10:50.630168  383672 addons.go:234] Setting addon volumesnapshots=true in "addons-214113"
	I0916 17:10:50.630191  383672 host.go:66] Checking if "addons-214113" exists ...
	I0916 17:10:50.630212  383672 addons.go:69] Setting default-storageclass=true in profile "addons-214113"
	I0916 17:10:50.630232  383672 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-214113"
	I0916 17:10:50.629537  383672 addons.go:234] Setting addon yakd=true in "addons-214113"
	I0916 17:10:50.630701  383672 host.go:66] Checking if "addons-214113" exists ...
	I0916 17:10:50.630036  383672 host.go:66] Checking if "addons-214113" exists ...
	I0916 17:10:50.630976  383672 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0916 17:10:50.630990  383672 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0916 17:10:50.631007  383672 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0916 17:10:50.631024  383672 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 17:10:50.630084  383672 addons.go:234] Setting addon ingress-dns=true in "addons-214113"
	I0916 17:10:50.631130  383672 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 17:10:50.630067  383672 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 17:10:50.630050  383672 addons.go:69] Setting volcano=true in profile "addons-214113"
	I0916 17:10:50.631165  383672 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0916 17:10:50.631438  383672 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0916 17:10:50.630101  383672 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 17:10:50.631514  383672 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0916 17:10:50.631195  383672 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-214113"
	I0916 17:10:50.631547  383672 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 17:10:50.631552  383672 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 17:10:50.631567  383672 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-214113"
	I0916 17:10:50.631593  383672 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 17:10:50.631621  383672 host.go:66] Checking if "addons-214113" exists ...
	I0916 17:10:50.631916  383672 out.go:177] * Verifying Kubernetes components...
	I0916 17:10:50.632013  383672 addons.go:234] Setting addon volcano=true in "addons-214113"
	I0916 17:10:50.632059  383672 host.go:66] Checking if "addons-214113" exists ...
	I0916 17:10:50.632080  383672 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0916 17:10:50.632123  383672 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 17:10:50.631205  383672 host.go:66] Checking if "addons-214113" exists ...
	I0916 17:10:50.633633  383672 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 17:10:50.631281  383672 host.go:66] Checking if "addons-214113" exists ...
	I0916 17:10:50.631181  383672 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 17:10:50.631306  383672 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0916 17:10:50.634762  383672 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 17:10:50.634945  383672 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0916 17:10:50.634992  383672 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 17:10:50.631375  383672 host.go:66] Checking if "addons-214113" exists ...
	I0916 17:10:50.635914  383672 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0916 17:10:50.635944  383672 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 17:10:50.631299  383672 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0916 17:10:50.637418  383672 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 17:10:50.652435  383672 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39773
	I0916 17:10:50.652646  383672 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43357
	I0916 17:10:50.652965  383672 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33357
	I0916 17:10:50.653448  383672 main.go:141] libmachine: () Calling .GetVersion
	I0916 17:10:50.653460  383672 main.go:141] libmachine: () Calling .GetVersion
	I0916 17:10:50.653751  383672 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34335
	I0916 17:10:50.653998  383672 main.go:141] libmachine: Using API Version  1
	I0916 17:10:50.654024  383672 main.go:141] libmachine: () Calling .GetVersion
	I0916 17:10:50.654042  383672 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 17:10:50.654496  383672 main.go:141] libmachine: () Calling .GetMachineName
	I0916 17:10:50.654575  383672 main.go:141] libmachine: () Calling .GetVersion
	I0916 17:10:50.654603  383672 main.go:141] libmachine: Using API Version  1
	I0916 17:10:50.654622  383672 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 17:10:50.654748  383672 main.go:141] libmachine: Using API Version  1
	I0916 17:10:50.654785  383672 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 17:10:50.655027  383672 main.go:141] libmachine: Using API Version  1
	I0916 17:10:50.655028  383672 main.go:141] libmachine: () Calling .GetMachineName
	I0916 17:10:50.655053  383672 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 17:10:50.655328  383672 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0916 17:10:50.655378  383672 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 17:10:50.655490  383672 main.go:141] libmachine: () Calling .GetMachineName
	I0916 17:10:50.657202  383672 main.go:141] libmachine: () Calling .GetMachineName
	I0916 17:10:50.660297  383672 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41047
	I0916 17:10:50.665767  383672 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0916 17:10:50.665817  383672 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 17:10:50.666570  383672 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0916 17:10:50.666641  383672 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 17:10:50.667592  383672 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0916 17:10:50.668245  383672 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 17:10:50.668377  383672 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0916 17:10:50.668426  383672 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 17:10:50.668572  383672 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0916 17:10:50.668626  383672 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 17:10:50.677583  383672 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44673
	I0916 17:10:50.677693  383672 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36715
	I0916 17:10:50.677752  383672 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37545
	I0916 17:10:50.677966  383672 main.go:141] libmachine: () Calling .GetVersion
	I0916 17:10:50.678129  383672 main.go:141] libmachine: () Calling .GetVersion
	I0916 17:10:50.678209  383672 main.go:141] libmachine: () Calling .GetVersion
	I0916 17:10:50.678992  383672 main.go:141] libmachine: Using API Version  1
	I0916 17:10:50.679019  383672 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 17:10:50.679152  383672 main.go:141] libmachine: Using API Version  1
	I0916 17:10:50.679162  383672 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 17:10:50.678876  383672 main.go:141] libmachine: Using API Version  1
	I0916 17:10:50.679211  383672 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 17:10:50.679624  383672 main.go:141] libmachine: () Calling .GetMachineName
	I0916 17:10:50.679714  383672 main.go:141] libmachine: () Calling .GetMachineName
	I0916 17:10:50.679749  383672 main.go:141] libmachine: () Calling .GetMachineName
	I0916 17:10:50.680245  383672 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0916 17:10:50.680276  383672 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 17:10:50.680495  383672 main.go:141] libmachine: (addons-214113) Calling .GetState
	I0916 17:10:50.681176  383672 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0916 17:10:50.681219  383672 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 17:10:50.681900  383672 main.go:141] libmachine: () Calling .GetVersion
	I0916 17:10:50.682382  383672 main.go:141] libmachine: Using API Version  1
	I0916 17:10:50.682410  383672 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 17:10:50.685547  383672 addons.go:234] Setting addon default-storageclass=true in "addons-214113"
	I0916 17:10:50.685564  383672 main.go:141] libmachine: () Calling .GetMachineName
	I0916 17:10:50.685603  383672 host.go:66] Checking if "addons-214113" exists ...
	I0916 17:10:50.685838  383672 main.go:141] libmachine: (addons-214113) Calling .GetState
	I0916 17:10:50.686074  383672 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0916 17:10:50.686111  383672 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 17:10:50.687704  383672 host.go:66] Checking if "addons-214113" exists ...
	I0916 17:10:50.688161  383672 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0916 17:10:50.688191  383672 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 17:10:50.689176  383672 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39291
	I0916 17:10:50.689182  383672 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35047
	I0916 17:10:50.693592  383672 main.go:141] libmachine: () Calling .GetVersion
	I0916 17:10:50.702617  383672 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44099
	I0916 17:10:50.703002  383672 main.go:141] libmachine: () Calling .GetVersion
	I0916 17:10:50.704181  383672 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38627
	I0916 17:10:50.704628  383672 main.go:141] libmachine: () Calling .GetVersion
	I0916 17:10:50.707297  383672 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46371
	I0916 17:10:50.707789  383672 main.go:141] libmachine: () Calling .GetVersion
	I0916 17:10:50.710224  383672 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45365
	I0916 17:10:50.710693  383672 main.go:141] libmachine: () Calling .GetVersion
	I0916 17:10:50.710786  383672 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44797
	I0916 17:10:50.714108  383672 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44445
	I0916 17:10:50.714721  383672 main.go:141] libmachine: () Calling .GetVersion
	I0916 17:10:50.714773  383672 main.go:141] libmachine: () Calling .GetVersion
	I0916 17:10:50.714926  383672 main.go:141] libmachine: Using API Version  1
	I0916 17:10:50.714941  383672 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 17:10:50.714994  383672 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39335
	I0916 17:10:50.715088  383672 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44467
	I0916 17:10:50.715271  383672 main.go:141] libmachine: Using API Version  1
	I0916 17:10:50.715285  383672 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 17:10:50.715428  383672 main.go:141] libmachine: Using API Version  1
	I0916 17:10:50.715440  383672 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 17:10:50.715565  383672 main.go:141] libmachine: Using API Version  1
	I0916 17:10:50.715574  383672 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 17:10:50.715687  383672 main.go:141] libmachine: Using API Version  1
	I0916 17:10:50.715696  383672 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 17:10:50.715750  383672 main.go:141] libmachine: () Calling .GetVersion
	I0916 17:10:50.716397  383672 main.go:141] libmachine: () Calling .GetMachineName
	I0916 17:10:50.716493  383672 main.go:141] libmachine: () Calling .GetMachineName
	I0916 17:10:50.716526  383672 main.go:141] libmachine: () Calling .GetMachineName
	I0916 17:10:50.716646  383672 main.go:141] libmachine: Using API Version  1
	I0916 17:10:50.716658  383672 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 17:10:50.716823  383672 main.go:141] libmachine: Using API Version  1
	I0916 17:10:50.716835  383672 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 17:10:50.716898  383672 main.go:141] libmachine: () Calling .GetVersion
	I0916 17:10:50.716955  383672 main.go:141] libmachine: (addons-214113) Calling .GetState
	I0916 17:10:50.717418  383672 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0916 17:10:50.717453  383672 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 17:10:50.717942  383672 main.go:141] libmachine: () Calling .GetMachineName
	I0916 17:10:50.718001  383672 main.go:141] libmachine: () Calling .GetMachineName
	I0916 17:10:50.718106  383672 main.go:141] libmachine: Using API Version  1
	I0916 17:10:50.718116  383672 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 17:10:50.718157  383672 main.go:141] libmachine: () Calling .GetMachineName
	I0916 17:10:50.718885  383672 main.go:141] libmachine: (addons-214113) Calling .GetState
	I0916 17:10:50.718941  383672 main.go:141] libmachine: (addons-214113) Calling .GetState
	I0916 17:10:50.719085  383672 main.go:141] libmachine: Using API Version  1
	I0916 17:10:50.719101  383672 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 17:10:50.719170  383672 main.go:141] libmachine: () Calling .GetVersion
	I0916 17:10:50.719244  383672 main.go:141] libmachine: (addons-214113) Calling .GetState
	I0916 17:10:50.719286  383672 main.go:141] libmachine: (addons-214113) Calling .GetState
	I0916 17:10:50.719326  383672 main.go:141] libmachine: () Calling .GetMachineName
	I0916 17:10:50.719364  383672 main.go:141] libmachine: (addons-214113) Calling .DriverName
	I0916 17:10:50.720242  383672 main.go:141] libmachine: Using API Version  1
	I0916 17:10:50.720261  383672 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 17:10:50.720295  383672 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0916 17:10:50.720337  383672 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 17:10:50.721095  383672 main.go:141] libmachine: () Calling .GetMachineName
	I0916 17:10:50.721108  383672 main.go:141] libmachine: () Calling .GetMachineName
	I0916 17:10:50.721158  383672 main.go:141] libmachine: () Calling .GetMachineName
	I0916 17:10:50.721360  383672 main.go:141] libmachine: (addons-214113) Calling .DriverName
	I0916 17:10:50.721605  383672 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.23
	I0916 17:10:50.721739  383672 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0916 17:10:50.721792  383672 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 17:10:50.721807  383672 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0916 17:10:50.721852  383672 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 17:10:50.722114  383672 main.go:141] libmachine: (addons-214113) Calling .DriverName
	I0916 17:10:50.722836  383672 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38823
	I0916 17:10:50.723416  383672 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0916 17:10:50.723436  383672 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0916 17:10:50.723455  383672 main.go:141] libmachine: (addons-214113) Calling .GetSSHHostname
	I0916 17:10:50.724008  383672 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0916 17:10:50.724144  383672 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0916 17:10:50.724397  383672 main.go:141] libmachine: (addons-214113) Calling .DriverName
	I0916 17:10:50.725045  383672 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46227
	I0916 17:10:50.725453  383672 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41899
	I0916 17:10:50.725802  383672 main.go:141] libmachine: () Calling .GetVersion
	I0916 17:10:50.725869  383672 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0916 17:10:50.726203  383672 main.go:141] libmachine: () Calling .GetVersion
	I0916 17:10:50.726564  383672 main.go:141] libmachine: Using API Version  1
	I0916 17:10:50.726587  383672 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 17:10:50.726607  383672 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0916 17:10:50.726647  383672 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0916 17:10:50.727128  383672 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0916 17:10:50.727148  383672 main.go:141] libmachine: (addons-214113) Calling .GetSSHHostname
	I0916 17:10:50.726707  383672 main.go:141] libmachine: Using API Version  1
	I0916 17:10:50.727198  383672 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 17:10:50.727346  383672 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0916 17:10:50.727362  383672 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0916 17:10:50.727379  383672 main.go:141] libmachine: (addons-214113) Calling .GetSSHHostname
	I0916 17:10:50.727618  383672 main.go:141] libmachine: () Calling .GetMachineName
	I0916 17:10:50.727853  383672 main.go:141] libmachine: () Calling .GetMachineName
	I0916 17:10:50.728072  383672 main.go:141] libmachine: (addons-214113) Calling .GetState
	I0916 17:10:50.728168  383672 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0916 17:10:50.728209  383672 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 17:10:50.729476  383672 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0916 17:10:50.730963  383672 main.go:141] libmachine: (addons-214113) DBG | domain addons-214113 has defined MAC address 52:54:00:53:e2:c9 in network mk-addons-214113
	I0916 17:10:50.731004  383672 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0916 17:10:50.731482  383672 main.go:141] libmachine: (addons-214113) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:53:e2:c9", ip: ""} in network mk-addons-214113: {Iface:virbr1 ExpiryTime:2024-09-16 18:10:12 +0000 UTC Type:0 Mac:52:54:00:53:e2:c9 Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:addons-214113 Clientid:01:52:54:00:53:e2:c9}
	I0916 17:10:50.731682  383672 main.go:141] libmachine: (addons-214113) DBG | domain addons-214113 has defined IP address 192.168.39.110 and MAC address 52:54:00:53:e2:c9 in network mk-addons-214113
	I0916 17:10:50.731886  383672 main.go:141] libmachine: (addons-214113) Calling .GetSSHPort
	I0916 17:10:50.732091  383672 main.go:141] libmachine: (addons-214113) Calling .GetSSHKeyPath
	I0916 17:10:50.732250  383672 main.go:141] libmachine: (addons-214113) Calling .GetSSHUsername
	I0916 17:10:50.732405  383672 sshutil.go:53] new ssh client: &{IP:192.168.39.110 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19649-375661/.minikube/machines/addons-214113/id_rsa Username:docker}
	I0916 17:10:50.733299  383672 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0916 17:10:50.733869  383672 main.go:141] libmachine: (addons-214113) DBG | domain addons-214113 has defined MAC address 52:54:00:53:e2:c9 in network mk-addons-214113
	I0916 17:10:50.734096  383672 main.go:141] libmachine: (addons-214113) DBG | domain addons-214113 has defined MAC address 52:54:00:53:e2:c9 in network mk-addons-214113
	I0916 17:10:50.734486  383672 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35475
	I0916 17:10:50.734539  383672 main.go:141] libmachine: (addons-214113) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:53:e2:c9", ip: ""} in network mk-addons-214113: {Iface:virbr1 ExpiryTime:2024-09-16 18:10:12 +0000 UTC Type:0 Mac:52:54:00:53:e2:c9 Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:addons-214113 Clientid:01:52:54:00:53:e2:c9}
	I0916 17:10:50.734561  383672 main.go:141] libmachine: (addons-214113) DBG | domain addons-214113 has defined IP address 192.168.39.110 and MAC address 52:54:00:53:e2:c9 in network mk-addons-214113
	I0916 17:10:50.734852  383672 main.go:141] libmachine: (addons-214113) Calling .GetSSHPort
	I0916 17:10:50.734986  383672 main.go:141] libmachine: (addons-214113) Calling .GetSSHKeyPath
	I0916 17:10:50.735084  383672 main.go:141] libmachine: (addons-214113) Calling .GetSSHUsername
	I0916 17:10:50.735116  383672 main.go:141] libmachine: (addons-214113) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:53:e2:c9", ip: ""} in network mk-addons-214113: {Iface:virbr1 ExpiryTime:2024-09-16 18:10:12 +0000 UTC Type:0 Mac:52:54:00:53:e2:c9 Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:addons-214113 Clientid:01:52:54:00:53:e2:c9}
	I0916 17:10:50.735127  383672 main.go:141] libmachine: (addons-214113) DBG | domain addons-214113 has defined IP address 192.168.39.110 and MAC address 52:54:00:53:e2:c9 in network mk-addons-214113
	I0916 17:10:50.735194  383672 sshutil.go:53] new ssh client: &{IP:192.168.39.110 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19649-375661/.minikube/machines/addons-214113/id_rsa Username:docker}
	I0916 17:10:50.735382  383672 main.go:141] libmachine: (addons-214113) Calling .GetSSHPort
	I0916 17:10:50.735487  383672 main.go:141] libmachine: (addons-214113) Calling .GetSSHKeyPath
	I0916 17:10:50.735554  383672 main.go:141] libmachine: (addons-214113) Calling .GetSSHUsername
	I0916 17:10:50.735630  383672 sshutil.go:53] new ssh client: &{IP:192.168.39.110 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19649-375661/.minikube/machines/addons-214113/id_rsa Username:docker}
	I0916 17:10:50.735854  383672 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0916 17:10:50.736167  383672 main.go:141] libmachine: (addons-214113) Calling .DriverName
	I0916 17:10:50.737374  383672 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0916 17:10:50.737419  383672 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0
	I0916 17:10:50.738427  383672 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0916 17:10:50.738443  383672 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0916 17:10:50.738463  383672 main.go:141] libmachine: (addons-214113) Calling .GetSSHHostname
	I0916 17:10:50.739670  383672 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0916 17:10:50.739953  383672 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36763
	I0916 17:10:50.740708  383672 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0916 17:10:50.740734  383672 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0916 17:10:50.740753  383672 main.go:141] libmachine: (addons-214113) Calling .GetSSHHostname
	I0916 17:10:50.742176  383672 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33599
	I0916 17:10:50.743066  383672 main.go:141] libmachine: (addons-214113) DBG | domain addons-214113 has defined MAC address 52:54:00:53:e2:c9 in network mk-addons-214113
	I0916 17:10:50.743628  383672 main.go:141] libmachine: (addons-214113) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:53:e2:c9", ip: ""} in network mk-addons-214113: {Iface:virbr1 ExpiryTime:2024-09-16 18:10:12 +0000 UTC Type:0 Mac:52:54:00:53:e2:c9 Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:addons-214113 Clientid:01:52:54:00:53:e2:c9}
	I0916 17:10:50.743652  383672 main.go:141] libmachine: (addons-214113) DBG | domain addons-214113 has defined IP address 192.168.39.110 and MAC address 52:54:00:53:e2:c9 in network mk-addons-214113
	I0916 17:10:50.743843  383672 main.go:141] libmachine: (addons-214113) Calling .GetSSHPort
	I0916 17:10:50.744007  383672 main.go:141] libmachine: (addons-214113) Calling .GetSSHKeyPath
	I0916 17:10:50.744139  383672 main.go:141] libmachine: (addons-214113) Calling .GetSSHUsername
	I0916 17:10:50.744335  383672 sshutil.go:53] new ssh client: &{IP:192.168.39.110 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19649-375661/.minikube/machines/addons-214113/id_rsa Username:docker}
	I0916 17:10:50.744410  383672 main.go:141] libmachine: (addons-214113) DBG | domain addons-214113 has defined MAC address 52:54:00:53:e2:c9 in network mk-addons-214113
	I0916 17:10:50.744591  383672 main.go:141] libmachine: (addons-214113) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:53:e2:c9", ip: ""} in network mk-addons-214113: {Iface:virbr1 ExpiryTime:2024-09-16 18:10:12 +0000 UTC Type:0 Mac:52:54:00:53:e2:c9 Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:addons-214113 Clientid:01:52:54:00:53:e2:c9}
	I0916 17:10:50.744714  383672 main.go:141] libmachine: (addons-214113) DBG | domain addons-214113 has defined IP address 192.168.39.110 and MAC address 52:54:00:53:e2:c9 in network mk-addons-214113
	I0916 17:10:50.744755  383672 main.go:141] libmachine: (addons-214113) Calling .GetSSHPort
	I0916 17:10:50.744902  383672 main.go:141] libmachine: (addons-214113) Calling .GetSSHKeyPath
	I0916 17:10:50.745081  383672 main.go:141] libmachine: (addons-214113) Calling .GetSSHUsername
	I0916 17:10:50.745197  383672 sshutil.go:53] new ssh client: &{IP:192.168.39.110 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19649-375661/.minikube/machines/addons-214113/id_rsa Username:docker}
	I0916 17:10:50.746048  383672 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33985
	I0916 17:10:50.746902  383672 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34389
	I0916 17:10:50.749790  383672 main.go:141] libmachine: () Calling .GetVersion
	I0916 17:10:50.749870  383672 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0916 17:10:50.749900  383672 main.go:141] libmachine: () Calling .GetVersion
	I0916 17:10:50.749909  383672 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 17:10:50.750217  383672 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-214113"
	I0916 17:10:50.750261  383672 host.go:66] Checking if "addons-214113" exists ...
	I0916 17:10:50.750265  383672 main.go:141] libmachine: () Calling .GetVersion
	I0916 17:10:50.750636  383672 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0916 17:10:50.750667  383672 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 17:10:50.750691  383672 main.go:141] libmachine: Using API Version  1
	I0916 17:10:50.750707  383672 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 17:10:50.750770  383672 main.go:141] libmachine: () Calling .GetVersion
	I0916 17:10:50.750880  383672 main.go:141] libmachine: Using API Version  1
	I0916 17:10:50.750887  383672 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 17:10:50.750992  383672 main.go:141] libmachine: Using API Version  1
	I0916 17:10:50.751002  383672 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 17:10:50.751208  383672 main.go:141] libmachine: () Calling .GetMachineName
	I0916 17:10:50.751267  383672 main.go:141] libmachine: () Calling .GetVersion
	I0916 17:10:50.751969  383672 main.go:141] libmachine: Using API Version  1
	I0916 17:10:50.751994  383672 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 17:10:50.751997  383672 main.go:141] libmachine: () Calling .GetMachineName
	I0916 17:10:50.752009  383672 main.go:141] libmachine: () Calling .GetMachineName
	I0916 17:10:50.751971  383672 main.go:141] libmachine: Using API Version  1
	I0916 17:10:50.752048  383672 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 17:10:50.752201  383672 main.go:141] libmachine: (addons-214113) Calling .DriverName
	I0916 17:10:50.752348  383672 main.go:141] libmachine: () Calling .GetVersion
	I0916 17:10:50.752416  383672 main.go:141] libmachine: () Calling .GetMachineName
	I0916 17:10:50.752479  383672 main.go:141] libmachine: () Calling .GetMachineName
	I0916 17:10:50.753112  383672 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0916 17:10:50.753167  383672 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 17:10:50.753382  383672 main.go:141] libmachine: (addons-214113) Calling .GetState
	I0916 17:10:50.753459  383672 main.go:141] libmachine: (addons-214113) Calling .GetState
	I0916 17:10:50.753560  383672 main.go:141] libmachine: (addons-214113) Calling .GetState
	I0916 17:10:50.753602  383672 main.go:141] libmachine: Using API Version  1
	I0916 17:10:50.753617  383672 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 17:10:50.754293  383672 main.go:141] libmachine: () Calling .GetMachineName
	I0916 17:10:50.755002  383672 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0916 17:10:50.755043  383672 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 17:10:50.755589  383672 main.go:141] libmachine: (addons-214113) Calling .DriverName
	I0916 17:10:50.755981  383672 main.go:141] libmachine: (addons-214113) Calling .DriverName
	I0916 17:10:50.756704  383672 main.go:141] libmachine: (addons-214113) Calling .DriverName
	I0916 17:10:50.757439  383672 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I0916 17:10:50.757505  383672 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0916 17:10:50.758140  383672 out.go:177]   - Using image docker.io/registry:2.8.3
	I0916 17:10:50.759057  383672 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0916 17:10:50.759076  383672 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0916 17:10:50.759088  383672 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0916 17:10:50.759093  383672 main.go:141] libmachine: (addons-214113) Calling .GetSSHHostname
	I0916 17:10:50.759098  383672 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0916 17:10:50.759109  383672 main.go:141] libmachine: (addons-214113) Calling .GetSSHHostname
	I0916 17:10:50.760103  383672 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0916 17:10:50.761129  383672 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0916 17:10:50.761146  383672 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0916 17:10:50.761162  383672 main.go:141] libmachine: (addons-214113) Calling .GetSSHHostname
	I0916 17:10:50.762996  383672 main.go:141] libmachine: (addons-214113) DBG | domain addons-214113 has defined MAC address 52:54:00:53:e2:c9 in network mk-addons-214113
	I0916 17:10:50.763648  383672 main.go:141] libmachine: (addons-214113) DBG | domain addons-214113 has defined MAC address 52:54:00:53:e2:c9 in network mk-addons-214113
	I0916 17:10:50.764001  383672 main.go:141] libmachine: (addons-214113) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:53:e2:c9", ip: ""} in network mk-addons-214113: {Iface:virbr1 ExpiryTime:2024-09-16 18:10:12 +0000 UTC Type:0 Mac:52:54:00:53:e2:c9 Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:addons-214113 Clientid:01:52:54:00:53:e2:c9}
	I0916 17:10:50.764026  383672 main.go:141] libmachine: (addons-214113) DBG | domain addons-214113 has defined IP address 192.168.39.110 and MAC address 52:54:00:53:e2:c9 in network mk-addons-214113
	I0916 17:10:50.764077  383672 main.go:141] libmachine: (addons-214113) DBG | domain addons-214113 has defined MAC address 52:54:00:53:e2:c9 in network mk-addons-214113
	I0916 17:10:50.764161  383672 main.go:141] libmachine: (addons-214113) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:53:e2:c9", ip: ""} in network mk-addons-214113: {Iface:virbr1 ExpiryTime:2024-09-16 18:10:12 +0000 UTC Type:0 Mac:52:54:00:53:e2:c9 Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:addons-214113 Clientid:01:52:54:00:53:e2:c9}
	I0916 17:10:50.764175  383672 main.go:141] libmachine: (addons-214113) DBG | domain addons-214113 has defined IP address 192.168.39.110 and MAC address 52:54:00:53:e2:c9 in network mk-addons-214113
	I0916 17:10:50.764292  383672 main.go:141] libmachine: (addons-214113) Calling .GetSSHPort
	I0916 17:10:50.764350  383672 main.go:141] libmachine: (addons-214113) Calling .GetSSHPort
	I0916 17:10:50.764506  383672 main.go:141] libmachine: (addons-214113) Calling .GetSSHKeyPath
	I0916 17:10:50.764545  383672 main.go:141] libmachine: (addons-214113) Calling .GetSSHKeyPath
	I0916 17:10:50.764585  383672 main.go:141] libmachine: (addons-214113) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:53:e2:c9", ip: ""} in network mk-addons-214113: {Iface:virbr1 ExpiryTime:2024-09-16 18:10:12 +0000 UTC Type:0 Mac:52:54:00:53:e2:c9 Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:addons-214113 Clientid:01:52:54:00:53:e2:c9}
	I0916 17:10:50.764605  383672 main.go:141] libmachine: (addons-214113) DBG | domain addons-214113 has defined IP address 192.168.39.110 and MAC address 52:54:00:53:e2:c9 in network mk-addons-214113
	I0916 17:10:50.764648  383672 main.go:141] libmachine: (addons-214113) Calling .GetSSHUsername
	I0916 17:10:50.764720  383672 main.go:141] libmachine: (addons-214113) Calling .GetSSHPort
	I0916 17:10:50.764765  383672 main.go:141] libmachine: (addons-214113) Calling .GetSSHUsername
	I0916 17:10:50.764771  383672 sshutil.go:53] new ssh client: &{IP:192.168.39.110 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19649-375661/.minikube/machines/addons-214113/id_rsa Username:docker}
	I0916 17:10:50.765174  383672 main.go:141] libmachine: (addons-214113) Calling .GetSSHKeyPath
	I0916 17:10:50.765225  383672 sshutil.go:53] new ssh client: &{IP:192.168.39.110 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19649-375661/.minikube/machines/addons-214113/id_rsa Username:docker}
	I0916 17:10:50.765331  383672 main.go:141] libmachine: (addons-214113) Calling .GetSSHUsername
	I0916 17:10:50.765435  383672 sshutil.go:53] new ssh client: &{IP:192.168.39.110 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19649-375661/.minikube/machines/addons-214113/id_rsa Username:docker}
	I0916 17:10:50.781167  383672 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35907
	I0916 17:10:50.781247  383672 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35631
	I0916 17:10:50.781335  383672 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42967
	I0916 17:10:50.781416  383672 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44247
	I0916 17:10:50.781937  383672 main.go:141] libmachine: () Calling .GetVersion
	I0916 17:10:50.782002  383672 main.go:141] libmachine: () Calling .GetVersion
	I0916 17:10:50.782104  383672 main.go:141] libmachine: () Calling .GetVersion
	I0916 17:10:50.782161  383672 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36591
	I0916 17:10:50.782413  383672 main.go:141] libmachine: Using API Version  1
	I0916 17:10:50.782431  383672 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 17:10:50.782551  383672 main.go:141] libmachine: () Calling .GetVersion
	I0916 17:10:50.782597  383672 main.go:141] libmachine: Using API Version  1
	I0916 17:10:50.782608  383672 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 17:10:50.782668  383672 main.go:141] libmachine: Using API Version  1
	I0916 17:10:50.782685  383672 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 17:10:50.782750  383672 main.go:141] libmachine: () Calling .GetMachineName
	I0916 17:10:50.783053  383672 main.go:141] libmachine: () Calling .GetMachineName
	I0916 17:10:50.783082  383672 main.go:141] libmachine: () Calling .GetMachineName
	I0916 17:10:50.783109  383672 main.go:141] libmachine: (addons-214113) Calling .GetState
	I0916 17:10:50.783150  383672 main.go:141] libmachine: () Calling .GetVersion
	I0916 17:10:50.783712  383672 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0916 17:10:50.783747  383672 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 17:10:50.783891  383672 main.go:141] libmachine: Using API Version  1
	I0916 17:10:50.783904  383672 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 17:10:50.783910  383672 main.go:141] libmachine: Using API Version  1
	I0916 17:10:50.783926  383672 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 17:10:50.783963  383672 main.go:141] libmachine: (addons-214113) Calling .GetState
	I0916 17:10:50.783963  383672 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46797
	I0916 17:10:50.784390  383672 main.go:141] libmachine: () Calling .GetMachineName
	I0916 17:10:50.784515  383672 main.go:141] libmachine: () Calling .GetVersion
	I0916 17:10:50.784645  383672 main.go:141] libmachine: (addons-214113) Calling .GetState
	I0916 17:10:50.785107  383672 main.go:141] libmachine: Using API Version  1
	I0916 17:10:50.785124  383672 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 17:10:50.785212  383672 main.go:141] libmachine: (addons-214113) Calling .DriverName
	I0916 17:10:50.785501  383672 main.go:141] libmachine: () Calling .GetMachineName
	I0916 17:10:50.785702  383672 main.go:141] libmachine: (addons-214113) Calling .DriverName
	I0916 17:10:50.786128  383672 main.go:141] libmachine: () Calling .GetMachineName
	I0916 17:10:50.786274  383672 main.go:141] libmachine: (addons-214113) Calling .DriverName
	I0916 17:10:50.786564  383672 main.go:141] libmachine: (addons-214113) Calling .GetState
	I0916 17:10:50.786704  383672 main.go:141] libmachine: (addons-214113) Calling .GetState
	I0916 17:10:50.787237  383672 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I0916 17:10:50.787926  383672 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0916 17:10:50.788013  383672 out.go:177]   - Using image docker.io/volcanosh/vc-scheduler:v1.9.0
	I0916 17:10:50.788706  383672 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0916 17:10:50.788724  383672 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I0916 17:10:50.788738  383672 main.go:141] libmachine: (addons-214113) Calling .GetSSHHostname
	I0916 17:10:50.788790  383672 main.go:141] libmachine: (addons-214113) Calling .DriverName
	I0916 17:10:50.789236  383672 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0916 17:10:50.789256  383672 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0916 17:10:50.789274  383672 main.go:141] libmachine: (addons-214113) Calling .GetSSHHostname
	I0916 17:10:50.789488  383672 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35939
	I0916 17:10:50.789914  383672 out.go:177]   - Using image docker.io/volcanosh/vc-webhook-manager:v1.9.0
	I0916 17:10:50.790000  383672 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	I0916 17:10:50.790037  383672 main.go:141] libmachine: (addons-214113) Calling .DriverName
	I0916 17:10:50.790152  383672 main.go:141] libmachine: () Calling .GetVersion
	I0916 17:10:50.790231  383672 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0916 17:10:50.790241  383672 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0916 17:10:50.790255  383672 main.go:141] libmachine: (addons-214113) Calling .GetSSHHostname
	I0916 17:10:50.790669  383672 main.go:141] libmachine: Using API Version  1
	I0916 17:10:50.790691  383672 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 17:10:50.791285  383672 main.go:141] libmachine: () Calling .GetMachineName
	I0916 17:10:50.791306  383672 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0916 17:10:50.791319  383672 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0916 17:10:50.791338  383672 main.go:141] libmachine: (addons-214113) Calling .GetSSHHostname
	I0916 17:10:50.791466  383672 main.go:141] libmachine: (addons-214113) Calling .GetState
	I0916 17:10:50.792455  383672 out.go:177]   - Using image docker.io/volcanosh/vc-controller-manager:v1.9.0
	I0916 17:10:50.792964  383672 main.go:141] libmachine: (addons-214113) DBG | domain addons-214113 has defined MAC address 52:54:00:53:e2:c9 in network mk-addons-214113
	I0916 17:10:50.793711  383672 main.go:141] libmachine: (addons-214113) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:53:e2:c9", ip: ""} in network mk-addons-214113: {Iface:virbr1 ExpiryTime:2024-09-16 18:10:12 +0000 UTC Type:0 Mac:52:54:00:53:e2:c9 Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:addons-214113 Clientid:01:52:54:00:53:e2:c9}
	I0916 17:10:50.793750  383672 main.go:141] libmachine: (addons-214113) DBG | domain addons-214113 has defined IP address 192.168.39.110 and MAC address 52:54:00:53:e2:c9 in network mk-addons-214113
	I0916 17:10:50.794082  383672 main.go:141] libmachine: (addons-214113) Calling .DriverName
	I0916 17:10:50.794275  383672 main.go:141] libmachine: (addons-214113) DBG | domain addons-214113 has defined MAC address 52:54:00:53:e2:c9 in network mk-addons-214113
	I0916 17:10:50.794615  383672 main.go:141] libmachine: (addons-214113) Calling .GetSSHPort
	I0916 17:10:50.794796  383672 main.go:141] libmachine: (addons-214113) Calling .GetSSHKeyPath
	I0916 17:10:50.794841  383672 main.go:141] libmachine: (addons-214113) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:53:e2:c9", ip: ""} in network mk-addons-214113: {Iface:virbr1 ExpiryTime:2024-09-16 18:10:12 +0000 UTC Type:0 Mac:52:54:00:53:e2:c9 Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:addons-214113 Clientid:01:52:54:00:53:e2:c9}
	I0916 17:10:50.794865  383672 main.go:141] libmachine: (addons-214113) DBG | domain addons-214113 has defined IP address 192.168.39.110 and MAC address 52:54:00:53:e2:c9 in network mk-addons-214113
	I0916 17:10:50.794950  383672 main.go:141] libmachine: (addons-214113) Calling .GetSSHUsername
	I0916 17:10:50.795109  383672 main.go:141] libmachine: (addons-214113) Calling .GetSSHPort
	I0916 17:10:50.795112  383672 sshutil.go:53] new ssh client: &{IP:192.168.39.110 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19649-375661/.minikube/machines/addons-214113/id_rsa Username:docker}
	I0916 17:10:50.795330  383672 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0916 17:10:50.795439  383672 main.go:141] libmachine: (addons-214113) Calling .GetSSHKeyPath
	I0916 17:10:50.795575  383672 main.go:141] libmachine: (addons-214113) Calling .GetSSHUsername
	I0916 17:10:50.795669  383672 sshutil.go:53] new ssh client: &{IP:192.168.39.110 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19649-375661/.minikube/machines/addons-214113/id_rsa Username:docker}
	I0916 17:10:50.795958  383672 main.go:141] libmachine: (addons-214113) DBG | domain addons-214113 has defined MAC address 52:54:00:53:e2:c9 in network mk-addons-214113
	I0916 17:10:50.796250  383672 main.go:141] libmachine: (addons-214113) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:53:e2:c9", ip: ""} in network mk-addons-214113: {Iface:virbr1 ExpiryTime:2024-09-16 18:10:12 +0000 UTC Type:0 Mac:52:54:00:53:e2:c9 Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:addons-214113 Clientid:01:52:54:00:53:e2:c9}
	I0916 17:10:50.796291  383672 main.go:141] libmachine: (addons-214113) DBG | domain addons-214113 has defined IP address 192.168.39.110 and MAC address 52:54:00:53:e2:c9 in network mk-addons-214113
	I0916 17:10:50.796508  383672 addons.go:431] installing /etc/kubernetes/addons/volcano-deployment.yaml
	I0916 17:10:50.796539  383672 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volcano-deployment.yaml (434001 bytes)
	I0916 17:10:50.796553  383672 main.go:141] libmachine: (addons-214113) Calling .GetSSHHostname
	I0916 17:10:50.796518  383672 main.go:141] libmachine: (addons-214113) Calling .GetSSHPort
	I0916 17:10:50.796743  383672 main.go:141] libmachine: (addons-214113) Calling .GetSSHKeyPath
	I0916 17:10:50.797021  383672 main.go:141] libmachine: (addons-214113) Calling .GetSSHUsername
	I0916 17:10:50.797185  383672 sshutil.go:53] new ssh client: &{IP:192.168.39.110 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19649-375661/.minikube/machines/addons-214113/id_rsa Username:docker}
	I0916 17:10:50.797420  383672 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.2
	I0916 17:10:50.797710  383672 main.go:141] libmachine: (addons-214113) DBG | domain addons-214113 has defined MAC address 52:54:00:53:e2:c9 in network mk-addons-214113
	W0916 17:10:50.797849  383672 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:38512->192.168.39.110:22: read: connection reset by peer
	I0916 17:10:50.797876  383672 retry.go:31] will retry after 373.837433ms: ssh: handshake failed: read tcp 192.168.39.1:38512->192.168.39.110:22: read: connection reset by peer
	I0916 17:10:50.798065  383672 main.go:141] libmachine: (addons-214113) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:53:e2:c9", ip: ""} in network mk-addons-214113: {Iface:virbr1 ExpiryTime:2024-09-16 18:10:12 +0000 UTC Type:0 Mac:52:54:00:53:e2:c9 Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:addons-214113 Clientid:01:52:54:00:53:e2:c9}
	I0916 17:10:50.798088  383672 main.go:141] libmachine: (addons-214113) DBG | domain addons-214113 has defined IP address 192.168.39.110 and MAC address 52:54:00:53:e2:c9 in network mk-addons-214113
	I0916 17:10:50.798265  383672 main.go:141] libmachine: (addons-214113) Calling .GetSSHPort
	I0916 17:10:50.798488  383672 main.go:141] libmachine: (addons-214113) Calling .GetSSHKeyPath
	I0916 17:10:50.798619  383672 main.go:141] libmachine: (addons-214113) Calling .GetSSHUsername
	I0916 17:10:50.798775  383672 sshutil.go:53] new ssh client: &{IP:192.168.39.110 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19649-375661/.minikube/machines/addons-214113/id_rsa Username:docker}
	I0916 17:10:50.799288  383672 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0916 17:10:50.799913  383672 main.go:141] libmachine: (addons-214113) DBG | domain addons-214113 has defined MAC address 52:54:00:53:e2:c9 in network mk-addons-214113
	I0916 17:10:50.800309  383672 main.go:141] libmachine: (addons-214113) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:53:e2:c9", ip: ""} in network mk-addons-214113: {Iface:virbr1 ExpiryTime:2024-09-16 18:10:12 +0000 UTC Type:0 Mac:52:54:00:53:e2:c9 Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:addons-214113 Clientid:01:52:54:00:53:e2:c9}
	I0916 17:10:50.800327  383672 main.go:141] libmachine: (addons-214113) DBG | domain addons-214113 has defined IP address 192.168.39.110 and MAC address 52:54:00:53:e2:c9 in network mk-addons-214113
	I0916 17:10:50.800432  383672 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0916 17:10:50.800470  383672 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0916 17:10:50.800492  383672 main.go:141] libmachine: (addons-214113) Calling .GetSSHHostname
	I0916 17:10:50.800539  383672 main.go:141] libmachine: (addons-214113) Calling .GetSSHPort
	I0916 17:10:50.800935  383672 main.go:141] libmachine: (addons-214113) Calling .GetSSHKeyPath
	I0916 17:10:50.801186  383672 main.go:141] libmachine: (addons-214113) Calling .GetSSHUsername
	I0916 17:10:50.801399  383672 sshutil.go:53] new ssh client: &{IP:192.168.39.110 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19649-375661/.minikube/machines/addons-214113/id_rsa Username:docker}
	I0916 17:10:50.803213  383672 main.go:141] libmachine: (addons-214113) DBG | domain addons-214113 has defined MAC address 52:54:00:53:e2:c9 in network mk-addons-214113
	I0916 17:10:50.803636  383672 main.go:141] libmachine: (addons-214113) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:53:e2:c9", ip: ""} in network mk-addons-214113: {Iface:virbr1 ExpiryTime:2024-09-16 18:10:12 +0000 UTC Type:0 Mac:52:54:00:53:e2:c9 Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:addons-214113 Clientid:01:52:54:00:53:e2:c9}
	I0916 17:10:50.803668  383672 main.go:141] libmachine: (addons-214113) DBG | domain addons-214113 has defined IP address 192.168.39.110 and MAC address 52:54:00:53:e2:c9 in network mk-addons-214113
	I0916 17:10:50.803826  383672 main.go:141] libmachine: (addons-214113) Calling .GetSSHPort
	I0916 17:10:50.803971  383672 main.go:141] libmachine: (addons-214113) Calling .GetSSHKeyPath
	I0916 17:10:50.804030  383672 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45627
	I0916 17:10:50.804237  383672 main.go:141] libmachine: (addons-214113) Calling .GetSSHUsername
	I0916 17:10:50.804372  383672 sshutil.go:53] new ssh client: &{IP:192.168.39.110 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19649-375661/.minikube/machines/addons-214113/id_rsa Username:docker}
	I0916 17:10:50.804403  383672 main.go:141] libmachine: () Calling .GetVersion
	I0916 17:10:50.804822  383672 main.go:141] libmachine: Using API Version  1
	I0916 17:10:50.804843  383672 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 17:10:50.805203  383672 main.go:141] libmachine: () Calling .GetMachineName
	I0916 17:10:50.805344  383672 main.go:141] libmachine: (addons-214113) Calling .GetState
	W0916 17:10:50.805802  383672 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:38540->192.168.39.110:22: read: connection reset by peer
	I0916 17:10:50.805823  383672 retry.go:31] will retry after 320.514276ms: ssh: handshake failed: read tcp 192.168.39.1:38540->192.168.39.110:22: read: connection reset by peer
	I0916 17:10:50.806622  383672 main.go:141] libmachine: (addons-214113) Calling .DriverName
	I0916 17:10:50.807923  383672 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0916 17:10:50.808818  383672 out.go:177]   - Using image docker.io/busybox:stable
	I0916 17:10:50.809770  383672 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0916 17:10:50.809781  383672 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0916 17:10:50.809793  383672 main.go:141] libmachine: (addons-214113) Calling .GetSSHHostname
	I0916 17:10:50.812029  383672 main.go:141] libmachine: (addons-214113) DBG | domain addons-214113 has defined MAC address 52:54:00:53:e2:c9 in network mk-addons-214113
	I0916 17:10:50.812265  383672 main.go:141] libmachine: (addons-214113) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:53:e2:c9", ip: ""} in network mk-addons-214113: {Iface:virbr1 ExpiryTime:2024-09-16 18:10:12 +0000 UTC Type:0 Mac:52:54:00:53:e2:c9 Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:addons-214113 Clientid:01:52:54:00:53:e2:c9}
	I0916 17:10:50.812287  383672 main.go:141] libmachine: (addons-214113) DBG | domain addons-214113 has defined IP address 192.168.39.110 and MAC address 52:54:00:53:e2:c9 in network mk-addons-214113
	I0916 17:10:50.812415  383672 main.go:141] libmachine: (addons-214113) Calling .GetSSHPort
	I0916 17:10:50.812577  383672 main.go:141] libmachine: (addons-214113) Calling .GetSSHKeyPath
	I0916 17:10:50.812689  383672 main.go:141] libmachine: (addons-214113) Calling .GetSSHUsername
	I0916 17:10:50.812811  383672 sshutil.go:53] new ssh client: &{IP:192.168.39.110 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19649-375661/.minikube/machines/addons-214113/id_rsa Username:docker}
	W0916 17:10:50.813422  383672 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:38544->192.168.39.110:22: read: connection reset by peer
	I0916 17:10:50.813440  383672 retry.go:31] will retry after 129.273825ms: ssh: handshake failed: read tcp 192.168.39.1:38544->192.168.39.110:22: read: connection reset by peer
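
The dial-failure warnings above show minikube's retry path: an SSH handshake reset by the guest's still-starting sshd is retried after a short randomized delay (373ms, 320ms, 129ms here) instead of failing the addon install outright. A rough stand-in for that behavior, not minikube's actual retry.go:

	package main

	import (
		"fmt"
		"math/rand"
		"net"
		"time"
	)

	// dialWithRetry retries a flaky TCP dial with a short randomized delay,
	// mirroring the "will retry after ..." lines in the log above.
	func dialWithRetry(addr string, attempts int) (net.Conn, error) {
		var lastErr error
		for i := 0; i < attempts; i++ {
			conn, err := net.DialTimeout("tcp", addr, 5*time.Second)
			if err == nil {
				return conn, nil
			}
			lastErr = err
			delay := time.Duration(100+rand.Intn(300)) * time.Millisecond
			fmt.Printf("dial failure (will retry after %v): %v\n", delay, err)
			time.Sleep(delay)
		}
		return nil, lastErr
	}

	func main() {
		conn, err := dialWithRetry("192.168.39.110:22", 3)
		if err != nil {
			fmt.Println("giving up:", err)
			return
		}
		conn.Close()
	}
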
	I0916 17:10:50.991031  383672 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0916 17:10:50.991042  383672 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0916 17:10:51.037902  383672 node_ready.go:35] waiting up to 6m0s for node "addons-214113" to be "Ready" ...
	I0916 17:10:51.040978  383672 node_ready.go:49] node "addons-214113" has status "Ready":"True"
	I0916 17:10:51.041002  383672 node_ready.go:38] duration metric: took 3.068527ms for node "addons-214113" to be "Ready" ...
	I0916 17:10:51.041013  383672 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0916 17:10:51.047638  383672 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-214113" in "kube-system" namespace to be "Ready" ...
	I0916 17:10:51.053249  383672 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0916 17:10:51.053274  383672 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0916 17:10:51.053647  383672 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0916 17:10:51.087746  383672 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0916 17:10:51.087770  383672 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0916 17:10:51.088951  383672 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0916 17:10:51.088971  383672 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0916 17:10:51.138132  383672 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0916 17:10:51.159500  383672 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0916 17:10:51.173355  383672 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0916 17:10:51.190338  383672 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0916 17:10:51.190365  383672 ssh_runner.go:362] scp helm-tiller/helm-tiller-rbac.yaml --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I0916 17:10:51.194528  383672 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0916 17:10:51.194547  383672 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0916 17:10:51.208573  383672 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0916 17:10:51.208597  383672 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0916 17:10:51.219068  383672 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0916 17:10:51.219083  383672 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0916 17:10:51.219559  383672 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml
	I0916 17:10:51.254590  383672 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0916 17:10:51.254613  383672 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0916 17:10:51.269866  383672 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0916 17:10:51.269884  383672 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0916 17:10:51.295061  383672 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0916 17:10:51.295082  383672 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0916 17:10:51.345061  383672 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0916 17:10:51.350611  383672 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0916 17:10:51.350625  383672 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0916 17:10:51.377404  383672 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0916 17:10:51.377428  383672 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0916 17:10:51.385116  383672 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0916 17:10:51.385137  383672 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0916 17:10:51.484622  383672 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0916 17:10:51.484658  383672 ssh_runner.go:362] scp helm-tiller/helm-tiller-svc.yaml --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I0916 17:10:51.503655  383672 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0916 17:10:51.509674  383672 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0916 17:10:51.509700  383672 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0916 17:10:51.524953  383672 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0916 17:10:51.524973  383672 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0916 17:10:51.538864  383672 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0916 17:10:51.553729  383672 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0916 17:10:51.553745  383672 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0916 17:10:51.587096  383672 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0916 17:10:51.587123  383672 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0916 17:10:51.616671  383672 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0916 17:10:51.616696  383672 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0916 17:10:51.667842  383672 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0916 17:10:51.667869  383672 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0916 17:10:51.681785  383672 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0916 17:10:51.681802  383672 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0916 17:10:51.690582  383672 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0916 17:10:51.707429  383672 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0916 17:10:51.734839  383672 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0916 17:10:51.772203  383672 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0916 17:10:51.772229  383672 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0916 17:10:51.783636  383672 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0916 17:10:51.783656  383672 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0916 17:10:51.800239  383672 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0916 17:10:51.800256  383672 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0916 17:10:51.890735  383672 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0916 17:10:51.931431  383672 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0916 17:10:51.931464  383672 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0916 17:10:51.948525  383672 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0916 17:10:51.948545  383672 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0916 17:10:51.973189  383672 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0916 17:10:51.973214  383672 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0916 17:10:52.175446  383672 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0916 17:10:52.198413  383672 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0916 17:10:52.198440  383672 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0916 17:10:52.237473  383672 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0916 17:10:52.237505  383672 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0916 17:10:52.415011  383672 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0916 17:10:52.415049  383672 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0916 17:10:52.440904  383672 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0916 17:10:52.619424  383672 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0916 17:10:52.619452  383672 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0916 17:10:52.903979  383672 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0916 17:10:52.904015  383672 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0916 17:10:52.937674  383672 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.946595768s)
	I0916 17:10:52.937715  383672 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
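
The pipeline that just completed rewrites the coredns ConfigMap so that host.minikube.internal resolves to the host gateway (192.168.39.1): sed inserts a hosts stanza before the "forward . /etc/resolv.conf" directive and a "log" directive before "errors", and kubectl replace pushes the edited Corefile back. The same edit as plain string manipulation, for illustration only (the sample Corefile here is a typical default, not the one captured from this cluster):

	package main

	import (
		"fmt"
		"strings"
	)

	func main() {
		corefile := `.:53 {
	        errors
	        health
	        forward . /etc/resolv.conf
	        cache 30
	}`
		hosts := `        hosts {
	           192.168.39.1 host.minikube.internal
	           fallthrough
	        }
	`
		var out strings.Builder
		for _, line := range strings.Split(corefile, "\n") {
			switch {
			// Insert the hosts stanza before the forward directive,
			// matching sed's /^        forward . \/etc\/resolv.conf.*/i.
			case strings.HasPrefix(strings.TrimSpace(line), "forward . /etc/resolv.conf"):
				out.WriteString(hosts)
			// Insert "log" before "errors", matching /^        errors *$/i.
			case strings.TrimSpace(line) == "errors":
				out.WriteString("        log\n")
			}
			out.WriteString(line + "\n")
		}
		fmt.Print(out.String())
	}
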
	I0916 17:10:53.112623  383672 pod_ready.go:103] pod "etcd-addons-214113" in "kube-system" namespace has status "Ready":"False"
	I0916 17:10:53.246861  383672 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (2.193174553s)
	I0916 17:10:53.246930  383672 main.go:141] libmachine: Making call to close driver server
	I0916 17:10:53.246945  383672 main.go:141] libmachine: (addons-214113) Calling .Close
	I0916 17:10:53.247305  383672 main.go:141] libmachine: (addons-214113) DBG | Closing plugin on server side
	I0916 17:10:53.247394  383672 main.go:141] libmachine: Successfully made call to close driver server
	I0916 17:10:53.247422  383672 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 17:10:53.247439  383672 main.go:141] libmachine: Making call to close driver server
	I0916 17:10:53.247454  383672 main.go:141] libmachine: (addons-214113) Calling .Close
	I0916 17:10:53.247729  383672 main.go:141] libmachine: Successfully made call to close driver server
	I0916 17:10:53.247748  383672 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 17:10:53.430455  383672 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0916 17:10:53.430482  383672 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0916 17:10:53.462808  383672 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-214113" context rescaled to 1 replicas
	I0916 17:10:53.871963  383672 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0916 17:10:55.059107  383672 pod_ready.go:93] pod "etcd-addons-214113" in "kube-system" namespace has status "Ready":"True"
	I0916 17:10:55.059130  383672 pod_ready.go:82] duration metric: took 4.01146944s for pod "etcd-addons-214113" in "kube-system" namespace to be "Ready" ...
	I0916 17:10:55.059138  383672 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-214113" in "kube-system" namespace to be "Ready" ...
	I0916 17:10:56.098833  383672 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.960658252s)
	I0916 17:10:56.098886  383672 main.go:141] libmachine: Making call to close driver server
	I0916 17:10:56.098898  383672 main.go:141] libmachine: (addons-214113) Calling .Close
	I0916 17:10:56.098984  383672 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (4.939449443s)
	I0916 17:10:56.099040  383672 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (4.925651296s)
	I0916 17:10:56.099087  383672 main.go:141] libmachine: Making call to close driver server
	I0916 17:10:56.099103  383672 main.go:141] libmachine: (addons-214113) Calling .Close
	I0916 17:10:56.099151  383672 main.go:141] libmachine: Successfully made call to close driver server
	I0916 17:10:56.099176  383672 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 17:10:56.099178  383672 main.go:141] libmachine: (addons-214113) DBG | Closing plugin on server side
	I0916 17:10:56.099192  383672 main.go:141] libmachine: Making call to close driver server
	I0916 17:10:56.099200  383672 main.go:141] libmachine: (addons-214113) Calling .Close
	I0916 17:10:56.099153  383672 main.go:141] libmachine: Making call to close driver server
	I0916 17:10:56.099252  383672 main.go:141] libmachine: (addons-214113) Calling .Close
	I0916 17:10:56.099406  383672 main.go:141] libmachine: (addons-214113) DBG | Closing plugin on server side
	I0916 17:10:56.099443  383672 main.go:141] libmachine: Successfully made call to close driver server
	I0916 17:10:56.099481  383672 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 17:10:56.099510  383672 main.go:141] libmachine: Making call to close driver server
	I0916 17:10:56.099522  383672 main.go:141] libmachine: (addons-214113) Calling .Close
	I0916 17:10:56.099522  383672 main.go:141] libmachine: Successfully made call to close driver server
	I0916 17:10:56.099534  383672 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 17:10:56.099488  383672 main.go:141] libmachine: Successfully made call to close driver server
	I0916 17:10:56.099569  383672 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 17:10:56.099579  383672 main.go:141] libmachine: Making call to close driver server
	I0916 17:10:56.099585  383672 main.go:141] libmachine: (addons-214113) Calling .Close
	I0916 17:10:56.099447  383672 main.go:141] libmachine: (addons-214113) DBG | Closing plugin on server side
	I0916 17:10:56.099774  383672 main.go:141] libmachine: (addons-214113) DBG | Closing plugin on server side
	I0916 17:10:56.099462  383672 main.go:141] libmachine: (addons-214113) DBG | Closing plugin on server side
	I0916 17:10:56.099815  383672 main.go:141] libmachine: Successfully made call to close driver server
	I0916 17:10:56.099824  383672 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 17:10:56.099864  383672 main.go:141] libmachine: (addons-214113) DBG | Closing plugin on server side
	I0916 17:10:56.099884  383672 main.go:141] libmachine: Successfully made call to close driver server
	I0916 17:10:56.099891  383672 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 17:10:56.149884  383672 main.go:141] libmachine: Making call to close driver server
	I0916 17:10:56.149907  383672 main.go:141] libmachine: (addons-214113) Calling .Close
	I0916 17:10:56.150154  383672 main.go:141] libmachine: Successfully made call to close driver server
	I0916 17:10:56.150176  383672 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 17:10:56.150191  383672 main.go:141] libmachine: (addons-214113) DBG | Closing plugin on server side
	I0916 17:10:57.082405  383672 pod_ready.go:103] pod "kube-apiserver-addons-214113" in "kube-system" namespace has status "Ready":"False"
	I0916 17:10:57.853078  383672 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0916 17:10:57.853126  383672 main.go:141] libmachine: (addons-214113) Calling .GetSSHHostname
	I0916 17:10:57.856700  383672 main.go:141] libmachine: (addons-214113) DBG | domain addons-214113 has defined MAC address 52:54:00:53:e2:c9 in network mk-addons-214113
	I0916 17:10:57.857212  383672 main.go:141] libmachine: (addons-214113) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:53:e2:c9", ip: ""} in network mk-addons-214113: {Iface:virbr1 ExpiryTime:2024-09-16 18:10:12 +0000 UTC Type:0 Mac:52:54:00:53:e2:c9 Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:addons-214113 Clientid:01:52:54:00:53:e2:c9}
	I0916 17:10:57.857243  383672 main.go:141] libmachine: (addons-214113) DBG | domain addons-214113 has defined IP address 192.168.39.110 and MAC address 52:54:00:53:e2:c9 in network mk-addons-214113
	I0916 17:10:57.857429  383672 main.go:141] libmachine: (addons-214113) Calling .GetSSHPort
	I0916 17:10:57.857691  383672 main.go:141] libmachine: (addons-214113) Calling .GetSSHKeyPath
	I0916 17:10:57.857882  383672 main.go:141] libmachine: (addons-214113) Calling .GetSSHUsername
	I0916 17:10:57.858060  383672 sshutil.go:53] new ssh client: &{IP:192.168.39.110 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19649-375661/.minikube/machines/addons-214113/id_rsa Username:docker}
	I0916 17:10:58.068281  383672 pod_ready.go:93] pod "kube-apiserver-addons-214113" in "kube-system" namespace has status "Ready":"True"
	I0916 17:10:58.068315  383672 pod_ready.go:82] duration metric: took 3.009169556s for pod "kube-apiserver-addons-214113" in "kube-system" namespace to be "Ready" ...
	I0916 17:10:58.068329  383672 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-214113" in "kube-system" namespace to be "Ready" ...
	I0916 17:10:58.078746  383672 pod_ready.go:93] pod "kube-controller-manager-addons-214113" in "kube-system" namespace has status "Ready":"True"
	I0916 17:10:58.078777  383672 pod_ready.go:82] duration metric: took 10.438699ms for pod "kube-controller-manager-addons-214113" in "kube-system" namespace to be "Ready" ...
	I0916 17:10:58.078789  383672 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-214113" in "kube-system" namespace to be "Ready" ...
	I0916 17:10:58.407618  383672 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0916 17:10:58.588963  383672 pod_ready.go:93] pod "kube-scheduler-addons-214113" in "kube-system" namespace has status "Ready":"True"
	I0916 17:10:58.588994  383672 pod_ready.go:82] duration metric: took 510.198053ms for pod "kube-scheduler-addons-214113" in "kube-system" namespace to be "Ready" ...
	I0916 17:10:58.589002  383672 pod_ready.go:39] duration metric: took 7.547976847s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
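
The pod_ready lines above poll each control-plane pod until its Ready condition is True, then report the elapsed time as a duration metric. A stripped-down version of that poll-until-deadline loop, with a stand-in condition in place of the Kubernetes API call minikube actually makes:

	package main

	import (
		"context"
		"fmt"
		"time"
	)

	// waitFor polls cond at the given interval until it returns true or the
	// context's deadline passes, and reports how long the wait took.
	func waitFor(ctx context.Context, interval time.Duration, cond func() bool) (time.Duration, error) {
		start := time.Now()
		ticker := time.NewTicker(interval)
		defer ticker.Stop()
		for {
			if cond() {
				return time.Since(start), nil
			}
			select {
			case <-ctx.Done():
				return time.Since(start), ctx.Err()
			case <-ticker.C:
			}
		}
	}

	func main() {
		// 6m0s matches the per-pod timeout in the log above.
		ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
		defer cancel()
		ready := time.Now().Add(2 * time.Second) // stand-in: condition holds after 2s
		took, err := waitFor(ctx, 500*time.Millisecond, func() bool { return time.Now().After(ready) })
		if err != nil {
			fmt.Println("timed out:", err)
			return
		}
		fmt.Printf("duration metric: took %v\n", took)
	}
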
	I0916 17:10:58.589024  383672 api_server.go:52] waiting for apiserver process to appear ...
	I0916 17:10:58.589111  383672 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 17:10:58.592649  383672 addons.go:234] Setting addon gcp-auth=true in "addons-214113"
	I0916 17:10:58.592700  383672 host.go:66] Checking if "addons-214113" exists ...
	I0916 17:10:58.593028  383672 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0916 17:10:58.593105  383672 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 17:10:58.610739  383672 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46261
	I0916 17:10:58.611234  383672 main.go:141] libmachine: () Calling .GetVersion
	I0916 17:10:58.611734  383672 main.go:141] libmachine: Using API Version  1
	I0916 17:10:58.611758  383672 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 17:10:58.612082  383672 main.go:141] libmachine: () Calling .GetMachineName
	I0916 17:10:58.612546  383672 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0916 17:10:58.612595  383672 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 17:10:58.628005  383672 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42173
	I0916 17:10:58.628470  383672 main.go:141] libmachine: () Calling .GetVersion
	I0916 17:10:58.628915  383672 main.go:141] libmachine: Using API Version  1
	I0916 17:10:58.628935  383672 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 17:10:58.629289  383672 main.go:141] libmachine: () Calling .GetMachineName
	I0916 17:10:58.629474  383672 main.go:141] libmachine: (addons-214113) Calling .GetState
	I0916 17:10:58.630738  383672 main.go:141] libmachine: (addons-214113) Calling .DriverName
	I0916 17:10:58.630932  383672 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0916 17:10:58.630959  383672 main.go:141] libmachine: (addons-214113) Calling .GetSSHHostname
	I0916 17:10:58.633780  383672 main.go:141] libmachine: (addons-214113) DBG | domain addons-214113 has defined MAC address 52:54:00:53:e2:c9 in network mk-addons-214113
	I0916 17:10:58.634181  383672 main.go:141] libmachine: (addons-214113) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:53:e2:c9", ip: ""} in network mk-addons-214113: {Iface:virbr1 ExpiryTime:2024-09-16 18:10:12 +0000 UTC Type:0 Mac:52:54:00:53:e2:c9 Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:addons-214113 Clientid:01:52:54:00:53:e2:c9}
	I0916 17:10:58.634211  383672 main.go:141] libmachine: (addons-214113) DBG | domain addons-214113 has defined IP address 192.168.39.110 and MAC address 52:54:00:53:e2:c9 in network mk-addons-214113
	I0916 17:10:58.634353  383672 main.go:141] libmachine: (addons-214113) Calling .GetSSHPort
	I0916 17:10:58.634531  383672 main.go:141] libmachine: (addons-214113) Calling .GetSSHKeyPath
	I0916 17:10:58.634690  383672 main.go:141] libmachine: (addons-214113) Calling .GetSSHUsername
	I0916 17:10:58.634823  383672 sshutil.go:53] new ssh client: &{IP:192.168.39.110 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19649-375661/.minikube/machines/addons-214113/id_rsa Username:docker}
	I0916 17:11:01.465954  383672 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml: (10.246357368s)
	I0916 17:11:01.466024  383672 main.go:141] libmachine: Making call to close driver server
	I0916 17:11:01.466041  383672 main.go:141] libmachine: (addons-214113) Calling .Close
	I0916 17:11:01.466062  383672 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (10.120954809s)
	I0916 17:11:01.466118  383672 main.go:141] libmachine: Making call to close driver server
	I0916 17:11:01.466137  383672 main.go:141] libmachine: (addons-214113) Calling .Close
	I0916 17:11:01.466173  383672 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (9.962477442s)
	I0916 17:11:01.466210  383672 main.go:141] libmachine: Making call to close driver server
	I0916 17:11:01.466229  383672 main.go:141] libmachine: (addons-214113) Calling .Close
	I0916 17:11:01.466281  383672 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (9.927377445s)
	I0916 17:11:01.466311  383672 main.go:141] libmachine: Making call to close driver server
	I0916 17:11:01.466323  383672 main.go:141] libmachine: (addons-214113) Calling .Close
	I0916 17:11:01.466381  383672 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (9.77577348s)
	I0916 17:11:01.466402  383672 main.go:141] libmachine: Making call to close driver server
	I0916 17:11:01.466403  383672 main.go:141] libmachine: (addons-214113) DBG | Closing plugin on server side
	I0916 17:11:01.466411  383672 main.go:141] libmachine: (addons-214113) Calling .Close
	I0916 17:11:01.466442  383672 main.go:141] libmachine: Successfully made call to close driver server
	I0916 17:11:01.466450  383672 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 17:11:01.466458  383672 main.go:141] libmachine: Making call to close driver server
	I0916 17:11:01.466466  383672 main.go:141] libmachine: (addons-214113) Calling .Close
	I0916 17:11:01.466497  383672 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (9.759036071s)
	I0916 17:11:01.466515  383672 main.go:141] libmachine: Making call to close driver server
	I0916 17:11:01.466529  383672 main.go:141] libmachine: (addons-214113) Calling .Close
	I0916 17:11:01.466542  383672 main.go:141] libmachine: Successfully made call to close driver server
	I0916 17:11:01.466576  383672 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 17:11:01.466585  383672 main.go:141] libmachine: Making call to close driver server
	I0916 17:11:01.466594  383672 main.go:141] libmachine: (addons-214113) Calling .Close
	I0916 17:11:01.466595  383672 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (9.73173434s)
	I0916 17:11:01.466611  383672 main.go:141] libmachine: Making call to close driver server
	I0916 17:11:01.466621  383672 main.go:141] libmachine: (addons-214113) Calling .Close
	I0916 17:11:01.466641  383672 main.go:141] libmachine: Successfully made call to close driver server
	I0916 17:11:01.466653  383672 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 17:11:01.466660  383672 main.go:141] libmachine: Making call to close driver server
	I0916 17:11:01.466670  383672 main.go:141] libmachine: (addons-214113) Calling .Close
	I0916 17:11:01.466708  383672 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (9.575947291s)
	I0916 17:11:01.466721  383672 main.go:141] libmachine: Making call to close driver server
	I0916 17:11:01.466737  383672 main.go:141] libmachine: (addons-214113) Calling .Close
	I0916 17:11:01.466748  383672 main.go:141] libmachine: (addons-214113) DBG | Closing plugin on server side
	I0916 17:11:01.466801  383672 main.go:141] libmachine: Successfully made call to close driver server
	I0916 17:11:01.466809  383672 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 17:11:01.466817  383672 main.go:141] libmachine: Making call to close driver server
	I0916 17:11:01.466825  383672 main.go:141] libmachine: (addons-214113) Calling .Close
	I0916 17:11:01.466859  383672 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (9.291378927s)
	I0916 17:11:01.466880  383672 main.go:141] libmachine: (addons-214113) DBG | Closing plugin on server side
	W0916 17:11:01.466886  383672 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0916 17:11:01.466910  383672 retry.go:31] will retry after 137.850548ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
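The failure above is a CRD-establishment race: the VolumeSnapshotClass object is applied in the same kubectl invocation that creates its CustomResourceDefinitions, and the API server has not yet registered the new kind, hence "no matches for kind \"VolumeSnapshotClass\"". A minimal sketch of the ordering fix, assuming the same manifest paths shown in the log, is to apply the CRDs on their own and wait for the Established condition before applying the custom resource:

	# Apply the CRDs first, then block until the API server serves the new kind.
	kubectl apply -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	kubectl wait --for=condition=established --timeout=60s \
	  crd/volumesnapshotclasses.snapshot.storage.k8s.io
	# Only now does the VolumeSnapshotClass manifest resolve:
	kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml

minikube instead retries the whole batch after a short backoff (and, below, with --force), which converges to the same end state once the CRDs are established.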
	I0916 17:11:01.466968  383672 main.go:141] libmachine: Successfully made call to close driver server
	I0916 17:11:01.466984  383672 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (9.026043346s)
	I0916 17:11:01.466998  383672 main.go:141] libmachine: Making call to close driver server
	I0916 17:11:01.467002  383672 main.go:141] libmachine: (addons-214113) DBG | Closing plugin on server side
	I0916 17:11:01.467008  383672 main.go:141] libmachine: (addons-214113) Calling .Close
	I0916 17:11:01.467032  383672 main.go:141] libmachine: (addons-214113) DBG | Closing plugin on server side
	I0916 17:11:01.466986  383672 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 17:11:01.467178  383672 main.go:141] libmachine: (addons-214113) DBG | Closing plugin on server side
	I0916 17:11:01.468690  383672 main.go:141] libmachine: (addons-214113) DBG | Closing plugin on server side
	I0916 17:11:01.468730  383672 main.go:141] libmachine: Successfully made call to close driver server
	I0916 17:11:01.468737  383672 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 17:11:01.468745  383672 main.go:141] libmachine: Making call to close driver server
	I0916 17:11:01.468751  383672 main.go:141] libmachine: (addons-214113) Calling .Close
	I0916 17:11:01.468810  383672 main.go:141] libmachine: (addons-214113) DBG | Closing plugin on server side
	I0916 17:11:01.468827  383672 main.go:141] libmachine: (addons-214113) DBG | Closing plugin on server side
	I0916 17:11:01.468847  383672 main.go:141] libmachine: Successfully made call to close driver server
	I0916 17:11:01.468853  383672 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 17:11:01.468861  383672 addons.go:475] Verifying addon registry=true in "addons-214113"
	I0916 17:11:01.469265  383672 main.go:141] libmachine: Successfully made call to close driver server
	I0916 17:11:01.469276  383672 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 17:11:01.469362  383672 main.go:141] libmachine: Successfully made call to close driver server
	I0916 17:11:01.469372  383672 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 17:11:01.469380  383672 main.go:141] libmachine: Making call to close driver server
	I0916 17:11:01.469387  383672 main.go:141] libmachine: (addons-214113) Calling .Close
	I0916 17:11:01.469453  383672 main.go:141] libmachine: Successfully made call to close driver server
	I0916 17:11:01.469461  383672 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 17:11:01.469468  383672 main.go:141] libmachine: Making call to close driver server
	I0916 17:11:01.469475  383672 main.go:141] libmachine: (addons-214113) Calling .Close
	I0916 17:11:01.469530  383672 main.go:141] libmachine: (addons-214113) DBG | Closing plugin on server side
	I0916 17:11:01.469565  383672 main.go:141] libmachine: Successfully made call to close driver server
	I0916 17:11:01.469573  383672 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 17:11:01.469582  383672 addons.go:475] Verifying addon ingress=true in "addons-214113"
	I0916 17:11:01.470593  383672 main.go:141] libmachine: (addons-214113) DBG | Closing plugin on server side
	I0916 17:11:01.470643  383672 main.go:141] libmachine: (addons-214113) DBG | Closing plugin on server side
	I0916 17:11:01.470681  383672 main.go:141] libmachine: Successfully made call to close driver server
	I0916 17:11:01.470700  383672 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 17:11:01.470780  383672 out.go:177] * Verifying registry addon...
	I0916 17:11:01.471462  383672 out.go:177] * Verifying ingress addon...
	I0916 17:11:01.471818  383672 main.go:141] libmachine: Successfully made call to close driver server
	I0916 17:11:01.471861  383672 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 17:11:01.472229  383672 main.go:141] libmachine: Successfully made call to close driver server
	I0916 17:11:01.472245  383672 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 17:11:01.472324  383672 main.go:141] libmachine: (addons-214113) DBG | Closing plugin on server side
	I0916 17:11:01.469786  383672 main.go:141] libmachine: Successfully made call to close driver server
	I0916 17:11:01.472408  383672 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 17:11:01.472419  383672 main.go:141] libmachine: Making call to close driver server
	I0916 17:11:01.472427  383672 main.go:141] libmachine: (addons-214113) Calling .Close
	I0916 17:11:01.472330  383672 main.go:141] libmachine: Successfully made call to close driver server
	I0916 17:11:01.472476  383672 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 17:11:01.472584  383672 main.go:141] libmachine: Making call to close driver server
	I0916 17:11:01.472527  383672 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-214113 service yakd-dashboard -n yakd-dashboard
	
	I0916 17:11:01.472610  383672 main.go:141] libmachine: (addons-214113) Calling .Close
	I0916 17:11:01.472717  383672 main.go:141] libmachine: (addons-214113) DBG | Closing plugin on server side
	I0916 17:11:01.472731  383672 main.go:141] libmachine: Successfully made call to close driver server
	I0916 17:11:01.472832  383672 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 17:11:01.472865  383672 main.go:141] libmachine: (addons-214113) DBG | Closing plugin on server side
	I0916 17:11:01.472905  383672 main.go:141] libmachine: Successfully made call to close driver server
	I0916 17:11:01.472924  383672 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 17:11:01.472968  383672 addons.go:475] Verifying addon metrics-server=true in "addons-214113"
	I0916 17:11:01.473330  383672 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0916 17:11:01.473818  383672 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0916 17:11:01.508027  383672 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0916 17:11:01.508053  383672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 17:11:01.508676  383672 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0916 17:11:01.508696  383672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
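The kapi.go polling above is minikube waiting on label selectors until the matched pods leave Pending and report Ready. A rough hand-run equivalent, assuming the same cluster context (the timeouts here are illustrative):

	kubectl --context addons-214113 -n kube-system wait pod \
	  -l kubernetes.io/minikube-addons=registry --for=condition=Ready --timeout=10m
	kubectl --context addons-214113 -n ingress-nginx wait pod \
	  -l app.kubernetes.io/name=ingress-nginx --for=condition=Ready --timeout=10m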
	I0916 17:11:01.605376  383672 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0916 17:11:01.616158  383672 main.go:141] libmachine: Making call to close driver server
	I0916 17:11:01.616180  383672 main.go:141] libmachine: (addons-214113) Calling .Close
	I0916 17:11:01.616602  383672 main.go:141] libmachine: (addons-214113) DBG | Closing plugin on server side
	I0916 17:11:01.616621  383672 main.go:141] libmachine: Successfully made call to close driver server
	I0916 17:11:01.616635  383672 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 17:11:02.004167  383672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 17:11:02.004803  383672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:11:02.368145  383672 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (8.496107932s)
	I0916 17:11:02.368208  383672 main.go:141] libmachine: Making call to close driver server
	I0916 17:11:02.368210  383672 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (3.779069722s)
	I0916 17:11:02.368251  383672 api_server.go:72] duration metric: took 11.739010817s to wait for apiserver process to appear ...
	I0916 17:11:02.368266  383672 api_server.go:88] waiting for apiserver healthz status ...
	I0916 17:11:02.368297  383672 api_server.go:253] Checking apiserver healthz at https://192.168.39.110:8443/healthz ...
	I0916 17:11:02.368226  383672 main.go:141] libmachine: (addons-214113) Calling .Close
	I0916 17:11:02.368294  383672 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (3.73733895s)
	I0916 17:11:02.368580  383672 main.go:141] libmachine: (addons-214113) DBG | Closing plugin on server side
	I0916 17:11:02.368635  383672 main.go:141] libmachine: Successfully made call to close driver server
	I0916 17:11:02.368644  383672 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 17:11:02.368659  383672 main.go:141] libmachine: Making call to close driver server
	I0916 17:11:02.368669  383672 main.go:141] libmachine: (addons-214113) Calling .Close
	I0916 17:11:02.368904  383672 main.go:141] libmachine: (addons-214113) DBG | Closing plugin on server side
	I0916 17:11:02.368947  383672 main.go:141] libmachine: Successfully made call to close driver server
	I0916 17:11:02.368965  383672 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 17:11:02.368978  383672 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-214113"
	I0916 17:11:02.370112  383672 out.go:177] * Verifying csi-hostpath-driver addon...
	I0916 17:11:02.371102  383672 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0916 17:11:02.371800  383672 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0916 17:11:02.373115  383672 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0916 17:11:02.374023  383672 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0916 17:11:02.374035  383672 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0916 17:11:02.397902  383672 api_server.go:279] https://192.168.39.110:8443/healthz returned 200:
	ok
	I0916 17:11:02.400576  383672 api_server.go:141] control plane version: v1.31.1
	I0916 17:11:02.400595  383672 api_server.go:131] duration metric: took 32.319086ms to wait for apiserver health ...
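The healthz probe is a plain HTTPS GET against the apiserver. The /healthz, /livez and /readyz endpoints are typically readable without credentials under the default system:public-info-viewer binding, so a manual spot check (TLS verification skipped for brevity) looks like:

	# Prints "ok" once the control plane is healthy:
	curl -k https://192.168.39.110:8443/healthz
	# Per-check breakdown, useful while healthz is still failing:
	curl -k "https://192.168.39.110:8443/healthz?verbose"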
	I0916 17:11:02.400603  383672 system_pods.go:43] waiting for kube-system pods to appear ...
	I0916 17:11:02.408777  383672 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0916 17:11:02.408796  383672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:11:02.424193  383672 system_pods.go:59] 18 kube-system pods found
	I0916 17:11:02.424225  383672 system_pods.go:61] "coredns-7c65d6cfc9-fkj5c" [2860ce52-d4bf-407b-9013-7d943b1ea44a] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0916 17:11:02.424235  383672 system_pods.go:61] "csi-hostpath-attacher-0" [a98312ee-68b6-4e5f-b2d4-489225f4e72b] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0916 17:11:02.424247  383672 system_pods.go:61] "csi-hostpath-resizer-0" [b6838e53-164d-4a17-9655-e68c35a5e2a0] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0916 17:11:02.424257  383672 system_pods.go:61] "csi-hostpathplugin-gbmc5" [63b8918c-c790-4b79-afbb-0146326e18bc] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0916 17:11:02.424264  383672 system_pods.go:61] "etcd-addons-214113" [af923d8f-3d0e-4b27-8829-3b91f474b2f6] Running
	I0916 17:11:02.424274  383672 system_pods.go:61] "kube-apiserver-addons-214113" [a44414df-58e1-4699-9816-e0d024650ddd] Running
	I0916 17:11:02.424281  383672 system_pods.go:61] "kube-controller-manager-addons-214113" [0d33e2a5-1d38-40b7-a11f-d6d24db13cb8] Running
	I0916 17:11:02.424290  383672 system_pods.go:61] "kube-ingress-dns-minikube" [812f7b80-8724-41b2-9627-e4f5366ac89b] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0916 17:11:02.424297  383672 system_pods.go:61] "kube-proxy-4t24k" [39b30fb3-f280-4033-9762-c8207ab8c7dc] Running
	I0916 17:11:02.424303  383672 system_pods.go:61] "kube-scheduler-addons-214113" [d20dbb10-417a-4885-b081-a8d6573e63c9] Running
	I0916 17:11:02.424312  383672 system_pods.go:61] "metrics-server-84c5f94fbc-vbk42" [6c2237a0-5a07-4a63-95b2-765b42ce9480] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0916 17:11:02.424325  383672 system_pods.go:61] "nvidia-device-plugin-daemonset-w467n" [c8c1c53c-2620-4519-9766-1b19808a63f0] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0916 17:11:02.424340  383672 system_pods.go:61] "registry-66c9cd494c-smm7x" [998a3900-52e0-4945-9a7d-442a928ba481] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0916 17:11:02.424351  383672 system_pods.go:61] "registry-proxy-jghxw" [0347148d-375f-49b2-a422-6401b38ca5fe] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0916 17:11:02.424363  383672 system_pods.go:61] "snapshot-controller-56fcc65765-6rh2v" [1897d90d-d874-4414-9292-4e501d1e8343] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0916 17:11:02.424375  383672 system_pods.go:61] "snapshot-controller-56fcc65765-rqhzz" [6760fca5-4458-4733-aa32-9761fe7a915e] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0916 17:11:02.424383  383672 system_pods.go:61] "storage-provisioner" [8da5d195-6d61-4fca-b384-89f5f8d6fe8f] Running
	I0916 17:11:02.424393  383672 system_pods.go:61] "tiller-deploy-b48cc5f79-dnc5m" [0f4e2597-a4d1-4c47-a5a2-79ef2c7607d4] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0916 17:11:02.424403  383672 system_pods.go:74] duration metric: took 23.793175ms to wait for pod list to return data ...
	I0916 17:11:02.424416  383672 default_sa.go:34] waiting for default service account to be created ...
	I0916 17:11:02.434058  383672 default_sa.go:45] found service account: "default"
	I0916 17:11:02.434077  383672 default_sa.go:55] duration metric: took 9.652106ms for default service account to be created ...
	I0916 17:11:02.434086  383672 system_pods.go:116] waiting for k8s-apps to be running ...
	I0916 17:11:02.442704  383672 system_pods.go:86] 18 kube-system pods found
	I0916 17:11:02.442729  383672 system_pods.go:89] "coredns-7c65d6cfc9-fkj5c" [2860ce52-d4bf-407b-9013-7d943b1ea44a] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0916 17:11:02.442737  383672 system_pods.go:89] "csi-hostpath-attacher-0" [a98312ee-68b6-4e5f-b2d4-489225f4e72b] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0916 17:11:02.442747  383672 system_pods.go:89] "csi-hostpath-resizer-0" [b6838e53-164d-4a17-9655-e68c35a5e2a0] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0916 17:11:02.442753  383672 system_pods.go:89] "csi-hostpathplugin-gbmc5" [63b8918c-c790-4b79-afbb-0146326e18bc] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0916 17:11:02.442757  383672 system_pods.go:89] "etcd-addons-214113" [af923d8f-3d0e-4b27-8829-3b91f474b2f6] Running
	I0916 17:11:02.442763  383672 system_pods.go:89] "kube-apiserver-addons-214113" [a44414df-58e1-4699-9816-e0d024650ddd] Running
	I0916 17:11:02.442767  383672 system_pods.go:89] "kube-controller-manager-addons-214113" [0d33e2a5-1d38-40b7-a11f-d6d24db13cb8] Running
	I0916 17:11:02.442772  383672 system_pods.go:89] "kube-ingress-dns-minikube" [812f7b80-8724-41b2-9627-e4f5366ac89b] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0916 17:11:02.442775  383672 system_pods.go:89] "kube-proxy-4t24k" [39b30fb3-f280-4033-9762-c8207ab8c7dc] Running
	I0916 17:11:02.442783  383672 system_pods.go:89] "kube-scheduler-addons-214113" [d20dbb10-417a-4885-b081-a8d6573e63c9] Running
	I0916 17:11:02.442788  383672 system_pods.go:89] "metrics-server-84c5f94fbc-vbk42" [6c2237a0-5a07-4a63-95b2-765b42ce9480] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0916 17:11:02.442794  383672 system_pods.go:89] "nvidia-device-plugin-daemonset-w467n" [c8c1c53c-2620-4519-9766-1b19808a63f0] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0916 17:11:02.442800  383672 system_pods.go:89] "registry-66c9cd494c-smm7x" [998a3900-52e0-4945-9a7d-442a928ba481] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0916 17:11:02.442813  383672 system_pods.go:89] "registry-proxy-jghxw" [0347148d-375f-49b2-a422-6401b38ca5fe] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0916 17:11:02.442819  383672 system_pods.go:89] "snapshot-controller-56fcc65765-6rh2v" [1897d90d-d874-4414-9292-4e501d1e8343] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0916 17:11:02.442827  383672 system_pods.go:89] "snapshot-controller-56fcc65765-rqhzz" [6760fca5-4458-4733-aa32-9761fe7a915e] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0916 17:11:02.442831  383672 system_pods.go:89] "storage-provisioner" [8da5d195-6d61-4fca-b384-89f5f8d6fe8f] Running
	I0916 17:11:02.442836  383672 system_pods.go:89] "tiller-deploy-b48cc5f79-dnc5m" [0f4e2597-a4d1-4c47-a5a2-79ef2c7607d4] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0916 17:11:02.442843  383672 system_pods.go:126] duration metric: took 8.751001ms to wait for k8s-apps to be running ...
	I0916 17:11:02.442851  383672 system_svc.go:44] waiting for kubelet service to be running ....
	I0916 17:11:02.442893  383672 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
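systemctl is-active --quiet exits 0 when the unit is active and prints nothing, which is all the wait loop above needs; the same check by hand:

	# Exit status 0 (and the echo) means kubelet is up:
	sudo systemctl is-active --quiet kubelet && echo kubelet running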
	I0916 17:11:02.476781  383672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 17:11:02.479142  383672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:11:02.520123  383672 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0916 17:11:02.520147  383672 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0916 17:11:02.606846  383672 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0916 17:11:02.606876  383672 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0916 17:11:02.698599  383672 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0916 17:11:02.877060  383672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:11:02.978612  383672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 17:11:02.979278  383672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:11:03.379348  383672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:11:03.478297  383672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 17:11:03.479275  383672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:11:03.493914  383672 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.888484234s)
	I0916 17:11:03.493944  383672 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (1.051027398s)
	I0916 17:11:03.493962  383672 main.go:141] libmachine: Making call to close driver server
	I0916 17:11:03.493971  383672 system_svc.go:56] duration metric: took 1.051115539s WaitForService to wait for kubelet
	I0916 17:11:03.493980  383672 main.go:141] libmachine: (addons-214113) Calling .Close
	I0916 17:11:03.493984  383672 kubeadm.go:582] duration metric: took 12.864742747s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0916 17:11:03.494006  383672 node_conditions.go:102] verifying NodePressure condition ...
	I0916 17:11:03.494252  383672 main.go:141] libmachine: Successfully made call to close driver server
	I0916 17:11:03.494269  383672 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 17:11:03.494278  383672 main.go:141] libmachine: Making call to close driver server
	I0916 17:11:03.494286  383672 main.go:141] libmachine: (addons-214113) Calling .Close
	I0916 17:11:03.494302  383672 main.go:141] libmachine: (addons-214113) DBG | Closing plugin on server side
	I0916 17:11:03.494522  383672 main.go:141] libmachine: Successfully made call to close driver server
	I0916 17:11:03.494539  383672 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 17:11:03.496349  383672 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0916 17:11:03.496379  383672 node_conditions.go:123] node cpu capacity is 2
	I0916 17:11:03.496393  383672 node_conditions.go:105] duration metric: took 2.380815ms to run NodePressure ...
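The NodePressure verification reads capacity and conditions straight off the Node object. Assuming the node carries the cluster's name, the same figures can be pulled with:

	kubectl --context addons-214113 get node addons-214113 \
	  -o jsonpath='{.status.capacity.cpu}{"\n"}{.status.capacity.ephemeral-storage}{"\n"}'
	# MemoryPressure / DiskPressure / PIDPressure conditions:
	kubectl --context addons-214113 describe node addons-214113 | grep -A6 'Conditions:'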
	I0916 17:11:03.496408  383672 start.go:241] waiting for startup goroutines ...
	I0916 17:11:03.885978  383672 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.187333746s)
	I0916 17:11:03.886035  383672 main.go:141] libmachine: Making call to close driver server
	I0916 17:11:03.886051  383672 main.go:141] libmachine: (addons-214113) Calling .Close
	I0916 17:11:03.886294  383672 main.go:141] libmachine: Successfully made call to close driver server
	I0916 17:11:03.886322  383672 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 17:11:03.886332  383672 main.go:141] libmachine: Making call to close driver server
	I0916 17:11:03.886344  383672 main.go:141] libmachine: (addons-214113) Calling .Close
	I0916 17:11:03.886567  383672 main.go:141] libmachine: Successfully made call to close driver server
	I0916 17:11:03.886583  383672 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 17:11:03.886603  383672 main.go:141] libmachine: (addons-214113) DBG | Closing plugin on server side
	I0916 17:11:03.886656  383672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:11:03.887532  383672 addons.go:475] Verifying addon gcp-auth=true in "addons-214113"
	I0916 17:11:03.888875  383672 out.go:177] * Verifying gcp-auth addon...
	I0916 17:11:03.890962  383672 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0916 17:11:03.986128  383672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:11:03.986273  383672 kapi.go:86] Found 0 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0916 17:11:03.986331  383672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 17:11:04.377227  383672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:11:04.477106  383672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 17:11:04.478035  383672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:11:04.875930  383672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:11:04.976631  383672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 17:11:04.978384  383672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:11:05.376919  383672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:11:05.476784  383672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 17:11:05.477974  383672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:11:05.876105  383672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:11:05.977430  383672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 17:11:05.978027  383672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:11:06.376176  383672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:11:06.478727  383672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 17:11:06.478984  383672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:11:06.877353  383672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:11:06.987578  383672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:11:06.988222  383672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 17:11:07.377165  383672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:11:07.477631  383672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 17:11:07.477851  383672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:11:07.877718  383672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:11:07.977183  383672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:11:07.979714  383672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 17:11:08.375837  383672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:11:08.477921  383672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 17:11:08.478070  383672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:11:08.876275  383672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:11:08.976999  383672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 17:11:08.977584  383672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:11:09.376302  383672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:11:09.476510  383672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 17:11:09.478527  383672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:11:09.876035  383672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:11:09.978372  383672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 17:11:09.978808  383672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:11:10.377186  383672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:11:10.476561  383672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 17:11:10.477292  383672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:11:10.877675  383672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:11:10.978301  383672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 17:11:10.978466  383672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:11:11.375992  383672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:11:11.476651  383672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 17:11:11.477092  383672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:11:11.876042  383672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:11:11.979940  383672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:11:11.980147  383672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 17:11:12.376659  383672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:11:12.476954  383672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 17:11:12.477792  383672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:11:12.876570  383672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:11:12.977909  383672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 17:11:12.978067  383672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:11:13.376317  383672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:11:13.476512  383672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 17:11:13.478166  383672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:11:13.875201  383672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:11:13.977854  383672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 17:11:13.978678  383672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:11:14.376507  383672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:11:14.477338  383672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 17:11:14.477751  383672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:11:14.878323  383672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:11:15.203910  383672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 17:11:15.205483  383672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:11:15.375562  383672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:11:15.475817  383672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 17:11:15.477647  383672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:11:15.890275  383672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:11:15.983509  383672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 17:11:15.984484  383672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:11:16.375894  383672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:11:16.477733  383672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 17:11:16.477792  383672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:11:16.878225  383672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:11:16.976687  383672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 17:11:16.978510  383672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:11:17.376191  383672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:11:17.477629  383672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 17:11:17.478258  383672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:11:17.875990  383672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:11:17.978319  383672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 17:11:17.978686  383672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:11:18.375871  383672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:11:18.478352  383672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 17:11:18.479579  383672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:11:18.876314  383672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:11:18.977360  383672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 17:11:18.977872  383672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:11:19.376507  383672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:11:19.476822  383672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 17:11:19.477751  383672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:11:19.887488  383672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:11:19.988416  383672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 17:11:19.989701  383672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:11:20.376335  383672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:11:20.477204  383672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 17:11:20.478386  383672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:11:20.875769  383672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:11:20.978566  383672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 17:11:20.978974  383672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:11:21.377209  383672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:11:21.478105  383672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:11:21.478197  383672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 17:11:21.877434  383672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:11:21.978359  383672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 17:11:21.978604  383672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:11:22.376499  383672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:11:22.477130  383672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 17:11:22.478309  383672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:11:22.876246  383672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:11:22.977767  383672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:11:22.978340  383672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 17:11:23.377269  383672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:11:23.478335  383672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 17:11:23.478430  383672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:11:23.875878  383672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:11:23.978079  383672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 17:11:23.978605  383672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:11:24.376421  383672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:11:24.477642  383672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:11:24.477806  383672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 17:11:24.876281  383672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:11:24.978663  383672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 17:11:24.979226  383672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:11:25.377010  383672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:11:25.476015  383672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 17:11:25.478113  383672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:11:25.876533  383672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:11:25.978714  383672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:11:25.978740  383672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 17:11:26.377126  383672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:11:26.477380  383672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 17:11:26.477598  383672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:11:27.156710  383672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:11:27.156957  383672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:11:27.156964  383672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 17:11:27.376165  383672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:11:27.477627  383672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 17:11:27.477715  383672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:11:27.877298  383672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:11:27.978716  383672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 17:11:27.978967  383672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:11:28.376946  383672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:11:28.477058  383672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 17:11:28.477192  383672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:11:28.876153  383672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:11:28.977306  383672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 17:11:28.977635  383672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:11:29.377597  383672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:11:29.478578  383672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 17:11:29.479230  383672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:11:29.876425  383672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:11:29.978088  383672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 17:11:29.978411  383672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:11:30.375925  383672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:11:30.477770  383672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 17:11:30.477915  383672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:11:30.875935  383672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:11:30.978150  383672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 17:11:30.978301  383672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:11:31.375903  383672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:11:31.476674  383672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 17:11:31.477003  383672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:11:31.876762  383672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:11:31.978124  383672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 17:11:31.978538  383672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:11:32.376084  383672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:11:32.476541  383672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 17:11:32.478152  383672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:11:32.875760  383672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:11:32.976906  383672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 17:11:32.977302  383672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:11:33.571668  383672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 17:11:33.571789  383672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:11:33.573459  383672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:11:33.881972  383672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:11:34.189022  383672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 17:11:34.189364  383672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:11:34.377298  383672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:11:34.477200  383672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 17:11:34.477635  383672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:11:34.881633  383672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:11:34.981875  383672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 17:11:34.982710  383672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:11:35.376159  383672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:11:35.477496  383672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:11:35.477921  383672 kapi.go:107] duration metric: took 34.004590035s to wait for kubernetes.io/minikube-addons=registry ...
	I0916 17:11:35.877913  383672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:11:36.073134  383672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:11:36.375378  383672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:11:36.478462  383672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:11:36.876415  383672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:11:36.980042  383672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:11:37.381178  383672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:11:37.477092  383672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:11:37.877354  383672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:11:37.976993  383672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:11:38.380379  383672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:11:38.499085  383672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:11:38.876220  383672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:11:38.977365  383672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:11:39.376338  383672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:11:39.478197  383672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:11:39.876564  383672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:11:39.978181  383672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:11:40.417579  383672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:11:40.518662  383672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:11:40.875742  383672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:11:40.978225  383672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:11:41.378331  383672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:11:41.477629  383672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:11:41.876581  383672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:11:41.976973  383672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:11:42.377741  383672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:11:42.477920  383672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:11:42.876135  383672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:11:42.976865  383672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:11:43.377493  383672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:11:43.477988  383672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:11:43.877148  383672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:11:43.977678  383672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:11:44.375537  383672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:11:44.478381  383672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:11:44.877401  383672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:11:44.977280  383672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:11:45.376506  383672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:11:45.477550  383672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:11:45.875991  383672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:11:45.977243  383672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:11:46.376446  383672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:11:46.477962  383672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:11:46.876536  383672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:11:46.976917  383672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:11:47.534979  383672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:11:47.535758  383672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:11:47.914761  383672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:11:48.013477  383672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:11:48.376589  383672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:11:48.477290  383672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:11:48.876093  383672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:11:48.977052  383672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:11:49.376598  383672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:11:49.476997  383672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:11:49.876751  383672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:11:49.978999  383672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:11:50.376488  383672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:11:50.476906  383672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:11:50.876576  383672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:11:50.977634  383672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:11:51.376432  383672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:11:51.478125  383672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:11:51.876079  383672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:11:51.977758  383672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:11:52.376441  383672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:11:52.477168  383672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:11:52.877324  383672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:11:52.979277  383672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:11:53.376336  383672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:11:53.477587  383672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:11:53.875762  383672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:11:53.978047  383672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:11:54.376671  383672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:11:54.478286  383672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:11:54.876264  383672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:11:54.977988  383672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:11:55.377011  383672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:11:55.477545  383672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:11:56.036583  383672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:11:56.037210  383672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:11:56.377434  383672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:11:56.477739  383672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:11:56.875266  383672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:11:56.976930  383672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:11:57.376692  383672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:11:57.477832  383672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:11:57.877285  383672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:11:57.978416  383672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:11:58.375722  383672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:11:58.476739  383672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:11:58.876711  383672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:11:58.983731  383672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:11:59.376574  383672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:11:59.478611  383672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:11:59.876206  383672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:11:59.995772  383672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:12:00.377027  383672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:12:00.478733  383672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:12:00.876212  383672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:12:00.977498  383672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:12:01.376090  383672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:12:01.479627  383672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:12:01.876625  383672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:12:01.978502  383672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:12:02.376648  383672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:12:02.478294  383672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:12:02.876446  383672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:12:02.977292  383672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:12:03.377545  383672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:12:03.485865  383672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:12:03.878953  383672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:12:03.978165  383672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:12:04.376523  383672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:12:04.477517  383672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:12:04.876004  383672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:12:04.979313  383672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:12:05.376609  383672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:12:05.477580  383672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:12:05.876040  383672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:12:05.978014  383672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:12:06.377104  383672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:12:06.478448  383672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:12:06.879445  383672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:12:06.980220  383672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:12:07.378331  383672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:12:07.479687  383672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:12:07.881390  383672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:12:07.979189  383672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:12:08.382269  383672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:12:08.481679  383672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:12:08.875766  383672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:12:08.977541  383672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:12:09.381882  383672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:12:09.477542  383672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:12:09.875739  383672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:12:09.978613  383672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:12:10.376397  383672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:12:10.477328  383672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:12:10.875745  383672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:12:10.976990  383672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:12:11.376173  383672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:12:11.477714  383672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:12:11.876060  383672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:12:11.977907  383672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:12:12.377169  383672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:12:12.478304  383672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:12:12.876452  383672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:12:12.977821  383672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:12:13.376079  383672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:12:13.477711  383672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:12:13.876883  383672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:12:13.977521  383672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:12:14.551265  383672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:12:14.551350  383672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:12:14.876860  383672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:12:14.977491  383672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:12:15.375816  383672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:12:15.477719  383672 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:12:15.877165  383672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:12:15.977856  383672 kapi.go:107] duration metric: took 1m14.504033823s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0916 17:12:16.376891  383672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:12:16.880301  383672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:12:17.377446  383672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:12:17.876582  383672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:12:18.376762  383672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:12:18.875823  383672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:12:19.377895  383672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:12:19.876707  383672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:12:20.376909  383672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:12:20.877669  383672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:12:21.378392  383672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:12:21.876485  383672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:12:22.378714  383672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:12:22.876401  383672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:12:23.376538  383672 kapi.go:107] duration metric: took 1m21.004733311s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0916 17:12:26.894117  383672 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0916 17:12:26.894145  383672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:12:27.394404  383672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:12:27.894858  383672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:12:28.396022  383672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:12:28.894060  383672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:12:29.394077  383672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:12:29.893953  383672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:12:30.394464  383672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:12:30.894706  383672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:12:31.394767  383672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:12:31.894655  383672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:12:32.394578  383672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:12:32.894373  383672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:12:33.395237  383672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:12:33.894253  383672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:12:34.394022  383672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:12:34.893853  383672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:12:35.394068  383672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:12:35.893930  383672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:12:36.393993  383672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:12:36.894643  383672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:12:37.395021  383672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:12:37.894045  383672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:12:38.395271  383672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:12:38.894677  383672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:12:39.396245  383672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:12:39.894863  383672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:12:40.395124  383672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:12:40.893687  383672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:12:41.395181  383672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:12:41.893771  383672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:12:42.394625  383672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:12:42.894594  383672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:12:43.393748  383672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:12:43.894810  383672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:12:44.393617  383672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:12:44.895018  383672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:12:45.394933  383672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:12:45.894910  383672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:12:46.394922  383672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:12:46.893891  383672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:12:47.394078  383672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:12:47.893990  383672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:12:48.394964  383672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:12:48.893698  383672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:12:49.394497  383672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:12:49.894507  383672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:12:50.395225  383672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:12:50.894342  383672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:12:51.394463  383672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:12:51.894286  383672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:12:52.395025  383672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:12:52.893711  383672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:12:53.394177  383672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:12:53.894592  383672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:12:54.394501  383672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:12:54.895275  383672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:12:55.394941  383672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:12:55.896239  383672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:12:56.394541  383672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:12:56.894999  383672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:12:57.393902  383672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:12:57.894163  383672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:12:58.393807  383672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:12:58.894569  383672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:12:59.394867  383672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:12:59.894428  383672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:13:00.395321  383672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:13:00.894766  383672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:13:01.395356  383672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:13:01.894562  383672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:13:02.394518  383672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:13:02.894679  383672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:13:03.393909  383672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:13:03.894267  383672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:13:04.394187  383672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:13:04.894103  383672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:13:05.394475  383672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:13:05.894552  383672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:13:06.398783  383672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:13:06.894754  383672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:13:07.394938  383672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:13:07.894502  383672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:13:08.394498  383672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:13:08.894415  383672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:13:09.394578  383672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:13:09.894608  383672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:13:10.395091  383672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:13:10.895572  383672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:13:11.394738  383672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:13:11.895104  383672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:13:12.394408  383672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:13:12.894689  383672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:13:13.394488  383672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:13:13.894918  383672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:13:14.394047  383672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:13:14.893880  383672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:13:15.394937  383672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:13:15.894525  383672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:13:16.394565  383672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:13:16.895025  383672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:13:17.394338  383672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:13:17.894342  383672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:13:18.394172  383672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:13:18.894211  383672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:13:19.394854  383672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:13:19.893900  383672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:13:20.394155  383672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:13:20.894027  383672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:13:21.394348  383672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:13:21.893934  383672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:13:22.393789  383672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:13:22.894678  383672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:13:23.394208  383672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:13:23.894069  383672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:13:24.393825  383672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:13:24.893500  383672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:13:25.394882  383672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:13:25.895885  383672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:13:26.393657  383672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:13:26.894654  383672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:13:27.394946  383672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:13:27.894026  383672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:13:28.394697  383672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:13:28.894843  383672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:13:29.395108  383672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:13:29.893897  383672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:13:30.394407  383672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:13:30.894317  383672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:13:31.394604  383672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:13:31.894738  383672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:13:32.394838  383672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:13:32.893799  383672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:13:33.394792  383672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:13:33.895015  383672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:13:34.394293  383672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:13:34.894132  383672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:13:35.398874  383672 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:13:35.894627  383672 kapi.go:107] duration metric: took 2m32.00366278s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0916 17:13:35.896042  383672 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-214113 cluster.
	I0916 17:13:35.897104  383672 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0916 17:13:35.898041  383672 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0916 17:13:35.899052  383672 out.go:177] * Enabled addons: cloud-spanner, storage-provisioner, nvidia-device-plugin, storage-provisioner-rancher, volcano, helm-tiller, ingress-dns, inspektor-gadget, metrics-server, yakd, default-storageclass, volumesnapshots, registry, ingress, csi-hostpath-driver, gcp-auth
	I0916 17:13:35.900176  383672 addons.go:510] duration metric: took 2m45.270860754s for enable addons: enabled=[cloud-spanner storage-provisioner nvidia-device-plugin storage-provisioner-rancher volcano helm-tiller ingress-dns inspektor-gadget metrics-server yakd default-storageclass volumesnapshots registry ingress csi-hostpath-driver gcp-auth]
	I0916 17:13:35.900215  383672 start.go:246] waiting for cluster config update ...
	I0916 17:13:35.900238  383672 start.go:255] writing updated cluster config ...
	I0916 17:13:35.900535  383672 ssh_runner.go:195] Run: rm -f paused
	I0916 17:13:35.953344  383672 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I0916 17:13:35.954618  383672 out.go:177] * Done! kubectl is now configured to use "addons-214113" cluster and "default" namespace by default
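
Note on the gcp-auth hints above: a minimal, hedged sketch of opting a pod out of credential mounting. The `gcp-auth-skip-secret` label key is taken from the log output above; the pod name `skip-demo` and the `busybox` image are hypothetical placeholders, not from this run.

	# Create a pod carrying the opt-out label, so the gcp-auth webhook skips it
	# (label key from the log above; pod name/image are illustrative only).
	kubectl --context addons-214113 run skip-demo --image=busybox \
	  --labels="gcp-auth-skip-secret=true" -- sleep 3600

Per the same log output, pods that already exist can pick up credentials by rerunning the addon enable with --refresh (assumed invocation, using the same binary and profile as the rest of this report):

	out/minikube-linux-amd64 -p addons-214113 addons enable gcp-auth --refresh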
	
	
	==> Docker <==
	Sep 16 17:23:27 addons-214113 dockerd[1194]: time="2024-09-16T17:23:27.485631106Z" level=info msg="ignoring event" container=58269053eb8132fdc13b0db6c1db52ee11e3e2da38f63f2cad9c728572ba1266 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 16 17:23:27 addons-214113 dockerd[1200]: time="2024-09-16T17:23:27.486944358Z" level=info msg="shim disconnected" id=58269053eb8132fdc13b0db6c1db52ee11e3e2da38f63f2cad9c728572ba1266 namespace=moby
	Sep 16 17:23:27 addons-214113 dockerd[1200]: time="2024-09-16T17:23:27.486994673Z" level=warning msg="cleaning up after shim disconnected" id=58269053eb8132fdc13b0db6c1db52ee11e3e2da38f63f2cad9c728572ba1266 namespace=moby
	Sep 16 17:23:27 addons-214113 dockerd[1200]: time="2024-09-16T17:23:27.487003896Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 16 17:23:31 addons-214113 dockerd[1194]: time="2024-09-16T17:23:31.201452605Z" level=info msg="ignoring event" container=a786ef1a15990cbd8f647079dd8dee5aff912c7d8d5260694010f925eba49f92 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 16 17:23:31 addons-214113 dockerd[1200]: time="2024-09-16T17:23:31.202439829Z" level=info msg="shim disconnected" id=a786ef1a15990cbd8f647079dd8dee5aff912c7d8d5260694010f925eba49f92 namespace=moby
	Sep 16 17:23:31 addons-214113 dockerd[1200]: time="2024-09-16T17:23:31.202888385Z" level=warning msg="cleaning up after shim disconnected" id=a786ef1a15990cbd8f647079dd8dee5aff912c7d8d5260694010f925eba49f92 namespace=moby
	Sep 16 17:23:31 addons-214113 dockerd[1200]: time="2024-09-16T17:23:31.203264707Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 16 17:23:31 addons-214113 dockerd[1200]: time="2024-09-16T17:23:31.716520247Z" level=info msg="shim disconnected" id=4c6475a305cbaaa33f446f57e8da83a90920a4b41e362139ac23b64c4af4a7cc namespace=moby
	Sep 16 17:23:31 addons-214113 dockerd[1200]: time="2024-09-16T17:23:31.716660154Z" level=warning msg="cleaning up after shim disconnected" id=4c6475a305cbaaa33f446f57e8da83a90920a4b41e362139ac23b64c4af4a7cc namespace=moby
	Sep 16 17:23:31 addons-214113 dockerd[1200]: time="2024-09-16T17:23:31.716702177Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 16 17:23:31 addons-214113 dockerd[1194]: time="2024-09-16T17:23:31.717095219Z" level=info msg="ignoring event" container=4c6475a305cbaaa33f446f57e8da83a90920a4b41e362139ac23b64c4af4a7cc module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 16 17:23:31 addons-214113 dockerd[1194]: time="2024-09-16T17:23:31.729936646Z" level=info msg="ignoring event" container=1b4a8afedae5c13743b6666f8633fdb45af10592ff444bdeea9ef0c6e3874c84 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 16 17:23:31 addons-214113 dockerd[1200]: time="2024-09-16T17:23:31.730186171Z" level=info msg="shim disconnected" id=1b4a8afedae5c13743b6666f8633fdb45af10592ff444bdeea9ef0c6e3874c84 namespace=moby
	Sep 16 17:23:31 addons-214113 dockerd[1200]: time="2024-09-16T17:23:31.730445109Z" level=warning msg="cleaning up after shim disconnected" id=1b4a8afedae5c13743b6666f8633fdb45af10592ff444bdeea9ef0c6e3874c84 namespace=moby
	Sep 16 17:23:31 addons-214113 dockerd[1200]: time="2024-09-16T17:23:31.730576351Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 16 17:23:31 addons-214113 dockerd[1194]: time="2024-09-16T17:23:31.888165138Z" level=info msg="ignoring event" container=c4124b3bd73948b180aaec316b1555d5e3641af00b357ec1930a6ee79ef9c475 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 16 17:23:31 addons-214113 dockerd[1200]: time="2024-09-16T17:23:31.889257261Z" level=info msg="shim disconnected" id=c4124b3bd73948b180aaec316b1555d5e3641af00b357ec1930a6ee79ef9c475 namespace=moby
	Sep 16 17:23:31 addons-214113 dockerd[1200]: time="2024-09-16T17:23:31.889313592Z" level=warning msg="cleaning up after shim disconnected" id=c4124b3bd73948b180aaec316b1555d5e3641af00b357ec1930a6ee79ef9c475 namespace=moby
	Sep 16 17:23:31 addons-214113 dockerd[1200]: time="2024-09-16T17:23:31.889323222Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 16 17:23:31 addons-214113 dockerd[1194]: time="2024-09-16T17:23:31.990145884Z" level=info msg="ignoring event" container=c20d667045dda9a3995c7236fb59045f421ef30b3f861dc485ebe8c38ab0e05c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 16 17:23:31 addons-214113 dockerd[1200]: time="2024-09-16T17:23:31.990765931Z" level=info msg="shim disconnected" id=c20d667045dda9a3995c7236fb59045f421ef30b3f861dc485ebe8c38ab0e05c namespace=moby
	Sep 16 17:23:31 addons-214113 dockerd[1200]: time="2024-09-16T17:23:31.991102374Z" level=warning msg="cleaning up after shim disconnected" id=c20d667045dda9a3995c7236fb59045f421ef30b3f861dc485ebe8c38ab0e05c namespace=moby
	Sep 16 17:23:31 addons-214113 dockerd[1200]: time="2024-09-16T17:23:31.992471420Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 16 17:23:32 addons-214113 cri-dockerd[1093]: time="2024-09-16T17:23:32Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"registry-proxy-jghxw_kube-system\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"c20d667045dda9a3995c7236fb59045f421ef30b3f861dc485ebe8c38ab0e05c\""
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	cfa97d5d634f9       kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6                                  8 seconds ago       Running             hello-world-app           0                   6bcf5c520f40a       hello-world-app-55bf9c44b4-j9nqj
	53ce22999f46b       ghcr.io/headlamp-k8s/headlamp@sha256:8825bb13459c64dcf9503d836b94b49c97dc831aff7c325a6eed68961388cf9c                        12 seconds ago      Running             headlamp                  0                   7ef98084b4000       headlamp-7b5c95b59d-c2gvx
	57f985db11d9f       nginx@sha256:a5127daff3d6f4606be3100a252419bfa84fd6ee5cd74d0feaca1a5068f97dcf                                                17 seconds ago      Running             nginx                     0                   0e57dddc7b135       nginx
	eebe612a7aee4       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:e6c5b3bc32072ea370d34c27836efd11b3519d25bd444c2a8efc339cff0e20fb                 9 minutes ago       Running             gcp-auth                  0                   51dd7a1c019a7       gcp-auth-89d5ffd79-9drqr
	a5f2adb01d31a       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a320a50cc91bd15fd2d6fa6de58bd98c1bd64b9a6f926ce23a600d87043455a3   11 minutes ago      Exited              patch                     0                   24d21f56fafb0       ingress-nginx-admission-patch-2fl8c
	4a31021606eb1       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a320a50cc91bd15fd2d6fa6de58bd98c1bd64b9a6f926ce23a600d87043455a3   11 minutes ago      Exited              create                    0                   ecd937ed48b8c       ingress-nginx-admission-create-ps4tx
	2dfab48d5236d       6e38f40d628db                                                                                                                12 minutes ago      Running             storage-provisioner       0                   03377427322a6       storage-provisioner
	f749e159c5a40       c69fa2e9cbf5f                                                                                                                12 minutes ago      Running             coredns                   0                   05b46c27cc4ae       coredns-7c65d6cfc9-fkj5c
	f32af0845eb76       60c005f310ff3                                                                                                                12 minutes ago      Running             kube-proxy                0                   66474a76a3ed5       kube-proxy-4t24k
	407a123775062       2e96e5913fc06                                                                                                                12 minutes ago      Running             etcd                      0                   674a91a645be0       etcd-addons-214113
	e138d901a576e       9aa1fad941575                                                                                                                12 minutes ago      Running             kube-scheduler            0                   34d6316c37cbb       kube-scheduler-addons-214113
	c2dde59740b04       6bab7719df100                                                                                                                12 minutes ago      Running             kube-apiserver            0                   7378eac683e92       kube-apiserver-addons-214113
	b0257b39ad861       175ffd71cce3d                                                                                                                12 minutes ago      Running             kube-controller-manager   0                   707c4a1e14259       kube-controller-manager-addons-214113
	
	
	==> coredns [f749e159c5a4] <==
	[INFO] 10.244.0.21:46566 - 20738 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000168823s
	[INFO] 10.244.0.21:51741 - 30840 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.00025632s
	[INFO] 10.244.0.21:46566 - 17740 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000111016s
	[INFO] 10.244.0.21:46566 - 29264 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000179675s
	[INFO] 10.244.0.21:46566 - 44998 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000097793s
	[INFO] 10.244.0.21:51741 - 29542 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000755501s
	[INFO] 10.244.0.21:51741 - 10251 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000159583s
	[INFO] 10.244.0.21:33156 - 43774 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000135587s
	[INFO] 10.244.0.21:46566 - 21653 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000171898s
	[INFO] 10.244.0.21:33156 - 20838 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000047781s
	[INFO] 10.244.0.21:51741 - 8205 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000183348s
	[INFO] 10.244.0.21:33156 - 57123 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000035031s
	[INFO] 10.244.0.21:46566 - 30435 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000736429s
	[INFO] 10.244.0.21:35665 - 33673 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000452196s
	[INFO] 10.244.0.21:51741 - 28428 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000168024s
	[INFO] 10.244.0.21:35665 - 41665 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000048707s
	[INFO] 10.244.0.21:33156 - 53885 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000117318s
	[INFO] 10.244.0.21:51741 - 51714 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000078402s
	[INFO] 10.244.0.21:33156 - 53787 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000227862s
	[INFO] 10.244.0.21:35665 - 35126 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000052474s
	[INFO] 10.244.0.21:33156 - 41158 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000040052s
	[INFO] 10.244.0.21:35665 - 41419 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000063139s
	[INFO] 10.244.0.21:35665 - 3467 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000101022s
	[INFO] 10.244.0.21:35665 - 61633 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000054247s
	[INFO] 10.244.0.21:35665 - 16923 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000064861s
	
	
	==> describe nodes <==
	Name:               addons-214113
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-214113
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=91d692c919753635ac118b7ed7ae5503b67c63c8
	                    minikube.k8s.io/name=addons-214113
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_16T17_10_46_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-214113
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 16 Sep 2024 17:10:43 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-214113
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 16 Sep 2024 17:23:32 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 16 Sep 2024 17:23:20 +0000   Mon, 16 Sep 2024 17:10:41 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 16 Sep 2024 17:23:20 +0000   Mon, 16 Sep 2024 17:10:41 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 16 Sep 2024 17:23:20 +0000   Mon, 16 Sep 2024 17:10:41 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 16 Sep 2024 17:23:20 +0000   Mon, 16 Sep 2024 17:10:48 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.110
	  Hostname:    addons-214113
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912788Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912788Ki
	  pods:               110
	System Info:
	  Machine ID:                 aff68f4b1d2846e9a9e29ee6a1cd20af
	  System UUID:                aff68f4b-1d28-46e9-a9e2-9ee6a1cd20af
	  Boot ID:                    26370bda-ac68-4599-a458-0b0df5e70d0e
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.2.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m15s
	  default                     hello-world-app-55bf9c44b4-j9nqj         0 (0%)        0 (0%)      0 (0%)           0 (0%)         10s
	  default                     nginx                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         22s
	  gcp-auth                    gcp-auth-89d5ffd79-9drqr                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  headlamp                    headlamp-7b5c95b59d-c2gvx                0 (0%)        0 (0%)      0 (0%)           0 (0%)         16s
	  kube-system                 coredns-7c65d6cfc9-fkj5c                 100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     12m
	  kube-system                 etcd-addons-214113                       100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         12m
	  kube-system                 kube-apiserver-addons-214113             250m (12%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-controller-manager-addons-214113    200m (10%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-4t24k                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-addons-214113             100m (5%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 storage-provisioner                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (4%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 12m                kube-proxy       
	  Normal  Starting                 12m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  12m (x5 over 12m)  kubelet          Node addons-214113 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    12m (x5 over 12m)  kubelet          Node addons-214113 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     12m (x5 over 12m)  kubelet          Node addons-214113 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  12m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 12m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  12m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  12m                kubelet          Node addons-214113 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    12m                kubelet          Node addons-214113 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     12m                kubelet          Node addons-214113 status is now: NodeHasSufficientPID
	  Normal  NodeReady                12m                kubelet          Node addons-214113 status is now: NodeReady
	  Normal  RegisteredNode           12m                node-controller  Node addons-214113 event: Registered Node addons-214113 in Controller
	
	
	==> dmesg <==
	[  +8.159516] kauditd_printk_skb: 20 callbacks suppressed
	[Sep16 17:12] kauditd_printk_skb: 23 callbacks suppressed
	[  +5.000828] kauditd_printk_skb: 39 callbacks suppressed
	[  +7.706495] kauditd_printk_skb: 33 callbacks suppressed
	[  +6.110740] kauditd_printk_skb: 21 callbacks suppressed
	[  +5.213081] kauditd_printk_skb: 32 callbacks suppressed
	[Sep16 17:13] kauditd_printk_skb: 28 callbacks suppressed
	[ +23.791399] kauditd_printk_skb: 40 callbacks suppressed
	[ +15.886347] kauditd_printk_skb: 9 callbacks suppressed
	[  +6.892318] kauditd_printk_skb: 28 callbacks suppressed
	[Sep16 17:14] kauditd_printk_skb: 2 callbacks suppressed
	[ +17.591666] kauditd_printk_skb: 20 callbacks suppressed
	[ +20.315986] kauditd_printk_skb: 2 callbacks suppressed
	[Sep16 17:17] kauditd_printk_skb: 28 callbacks suppressed
	[Sep16 17:22] kauditd_printk_skb: 28 callbacks suppressed
	[  +5.841886] kauditd_printk_skb: 19 callbacks suppressed
	[  +5.064061] kauditd_printk_skb: 25 callbacks suppressed
	[  +5.010987] kauditd_printk_skb: 48 callbacks suppressed
	[  +6.077436] kauditd_printk_skb: 23 callbacks suppressed
	[ +10.620969] kauditd_printk_skb: 10 callbacks suppressed
	[  +6.436994] kauditd_printk_skb: 36 callbacks suppressed
	[Sep16 17:23] kauditd_printk_skb: 7 callbacks suppressed
	[  +5.240259] kauditd_printk_skb: 59 callbacks suppressed
	[  +5.194192] kauditd_printk_skb: 13 callbacks suppressed
	[  +6.610967] kauditd_printk_skb: 38 callbacks suppressed
	
	
	==> etcd [407a12377506] <==
	{"level":"info","ts":"2024-09-16T17:11:56.003111Z","caller":"traceutil/trace.go:171","msg":"trace[46120206] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1146; }","duration":"156.77623ms","start":"2024-09-16T17:11:55.846325Z","end":"2024-09-16T17:11:56.003101Z","steps":["trace[46120206] 'range keys from in-memory index tree'  (duration: 156.340825ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-16T17:11:56.003463Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"138.04633ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-16T17:11:56.003721Z","caller":"traceutil/trace.go:171","msg":"trace[1499264559] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1146; }","duration":"138.3172ms","start":"2024-09-16T17:11:55.865383Z","end":"2024-09-16T17:11:56.003700Z","steps":["trace[1499264559] 'range keys from in-memory index tree'  (duration: 138.000351ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-16T17:12:01.824256Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"106.232593ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 serializable:true keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-16T17:12:01.824319Z","caller":"traceutil/trace.go:171","msg":"trace[759000759] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1182; }","duration":"106.303148ms","start":"2024-09-16T17:12:01.718002Z","end":"2024-09-16T17:12:01.824305Z","steps":["trace[759000759] 'range keys from in-memory index tree'  (duration: 106.19941ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-16T17:12:12.322221Z","caller":"traceutil/trace.go:171","msg":"trace[1599838040] transaction","detail":"{read_only:false; response_revision:1232; number_of_response:1; }","duration":"171.357013ms","start":"2024-09-16T17:12:12.150848Z","end":"2024-09-16T17:12:12.322205Z","steps":["trace[1599838040] 'process raft request'  (duration: 171.265016ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-16T17:12:14.516068Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"190.242515ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" ","response":"range_response_count:1 size:1113"}
	{"level":"info","ts":"2024-09-16T17:12:14.516128Z","caller":"traceutil/trace.go:171","msg":"trace[361043922] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:1234; }","duration":"190.313825ms","start":"2024-09-16T17:12:14.325800Z","end":"2024-09-16T17:12:14.516114Z","steps":["trace[361043922] 'range keys from in-memory index tree'  (duration: 190.151ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-16T17:12:14.516320Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"170.987632ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-16T17:12:14.516365Z","caller":"traceutil/trace.go:171","msg":"trace[9489659] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1234; }","duration":"171.041145ms","start":"2024-09-16T17:12:14.345316Z","end":"2024-09-16T17:12:14.516357Z","steps":["trace[9489659] 'range keys from in-memory index tree'  (duration: 170.906686ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-16T17:12:14.516625Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"151.143909ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-16T17:12:14.516644Z","caller":"traceutil/trace.go:171","msg":"trace[700330082] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1234; }","duration":"151.167758ms","start":"2024-09-16T17:12:14.365471Z","end":"2024-09-16T17:12:14.516638Z","steps":["trace[700330082] 'range keys from in-memory index tree'  (duration: 151.004961ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-16T17:12:20.661710Z","caller":"traceutil/trace.go:171","msg":"trace[2048144548] transaction","detail":"{read_only:false; response_revision:1288; number_of_response:1; }","duration":"102.799995ms","start":"2024-09-16T17:12:20.558895Z","end":"2024-09-16T17:12:20.661695Z","steps":["trace[2048144548] 'process raft request'  (duration: 102.661131ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-16T17:13:57.285699Z","caller":"traceutil/trace.go:171","msg":"trace[669061944] transaction","detail":"{read_only:false; response_revision:1567; number_of_response:1; }","duration":"231.09365ms","start":"2024-09-16T17:13:57.054577Z","end":"2024-09-16T17:13:57.285670Z","steps":["trace[669061944] 'process raft request'  (duration: 230.890556ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-16T17:14:00.566878Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"266.99022ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-16T17:14:00.566955Z","caller":"traceutil/trace.go:171","msg":"trace[1861434089] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1574; }","duration":"267.129685ms","start":"2024-09-16T17:14:00.299815Z","end":"2024-09-16T17:14:00.566945Z","steps":["trace[1861434089] 'range keys from in-memory index tree'  (duration: 266.766774ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-16T17:20:42.146006Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1884}
	{"level":"info","ts":"2024-09-16T17:20:42.236456Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1884,"took":"89.042154ms","hash":2242136239,"current-db-size-bytes":9302016,"current-db-size":"9.3 MB","current-db-size-in-use-bytes":4968448,"current-db-size-in-use":"5.0 MB"}
	{"level":"info","ts":"2024-09-16T17:20:42.236774Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2242136239,"revision":1884,"compact-revision":-1}
	{"level":"warn","ts":"2024-09-16T17:23:20.479246Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"160.950153ms","expected-duration":"100ms","prefix":"","request":"header:<ID:11932447723441707066 > lease_revoke:<id:259891fbd0a955d5>","response":"size:29"}
	{"level":"warn","ts":"2024-09-16T17:23:20.479841Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"265.684913ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/persistentvolumes/pvc-17170fae-194c-46e0-85da-9bafa109dae7\" ","response":"range_response_count:1 size:1262"}
	{"level":"info","ts":"2024-09-16T17:23:20.479892Z","caller":"traceutil/trace.go:171","msg":"trace[467143452] range","detail":"{range_begin:/registry/persistentvolumes/pvc-17170fae-194c-46e0-85da-9bafa109dae7; range_end:; response_count:1; response_revision:2983; }","duration":"265.772163ms","start":"2024-09-16T17:23:20.214100Z","end":"2024-09-16T17:23:20.479872Z","steps":["trace[467143452] 'agreement among raft nodes before linearized reading'  (duration: 265.616341ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-16T17:23:20.479639Z","caller":"traceutil/trace.go:171","msg":"trace[1798739890] linearizableReadLoop","detail":"{readStateIndex:3177; appliedIndex:3176; }","duration":"265.274869ms","start":"2024-09-16T17:23:20.214103Z","end":"2024-09-16T17:23:20.479378Z","steps":["trace[1798739890] 'read index received'  (duration: 102.260592ms)","trace[1798739890] 'applied index is now lower than readState.Index'  (duration: 163.013223ms)"],"step_count":2}
	{"level":"warn","ts":"2024-09-16T17:23:20.480197Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"211.975256ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/deployments/kube-system/snapshot-controller\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-16T17:23:20.480257Z","caller":"traceutil/trace.go:171","msg":"trace[1455563159] range","detail":"{range_begin:/registry/deployments/kube-system/snapshot-controller; range_end:; response_count:0; response_revision:2983; }","duration":"212.033232ms","start":"2024-09-16T17:23:20.268210Z","end":"2024-09-16T17:23:20.480243Z","steps":["trace[1455563159] 'agreement among raft nodes before linearized reading'  (duration: 211.942324ms)"],"step_count":1}
	
	
	==> gcp-auth [eebe612a7aee] <==
	2024/09/16 17:14:17 Ready to write response ...
	2024/09/16 17:22:20 Ready to marshal response ...
	2024/09/16 17:22:20 Ready to write response ...
	2024/09/16 17:22:20 Ready to marshal response ...
	2024/09/16 17:22:20 Ready to write response ...
	2024/09/16 17:22:31 Ready to marshal response ...
	2024/09/16 17:22:31 Ready to write response ...
	2024/09/16 17:22:31 Ready to marshal response ...
	2024/09/16 17:22:31 Ready to write response ...
	2024/09/16 17:22:31 Ready to marshal response ...
	2024/09/16 17:22:31 Ready to write response ...
	2024/09/16 17:22:36 Ready to marshal response ...
	2024/09/16 17:22:36 Ready to write response ...
	2024/09/16 17:23:04 Ready to marshal response ...
	2024/09/16 17:23:04 Ready to write response ...
	2024/09/16 17:23:10 Ready to marshal response ...
	2024/09/16 17:23:10 Ready to write response ...
	2024/09/16 17:23:15 Ready to marshal response ...
	2024/09/16 17:23:15 Ready to write response ...
	2024/09/16 17:23:16 Ready to marshal response ...
	2024/09/16 17:23:16 Ready to write response ...
	2024/09/16 17:23:16 Ready to marshal response ...
	2024/09/16 17:23:16 Ready to write response ...
	2024/09/16 17:23:22 Ready to marshal response ...
	2024/09/16 17:23:22 Ready to write response ...
	
	
	==> kernel <==
	 17:23:32 up 13 min,  0 users,  load average: 2.48, 1.19, 0.76
	Linux addons-214113 5.10.207 #1 SMP Mon Sep 16 15:00:28 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [c2dde59740b0] <==
	I0916 17:22:45.243439       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	E0916 17:22:47.644153       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	I0916 17:22:59.597224       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0916 17:23:00.730467       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I0916 17:23:10.587301       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I0916 17:23:10.762409       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.106.46.1"}
	I0916 17:23:15.993799       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.111.207.189"}
	I0916 17:23:19.930892       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0916 17:23:19.930952       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0916 17:23:19.954170       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0916 17:23:19.954204       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0916 17:23:19.995149       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0916 17:23:19.999249       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0916 17:23:20.025262       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0916 17:23:20.025951       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0916 17:23:20.052966       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0916 17:23:20.052989       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0916 17:23:21.025374       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0916 17:23:21.054215       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0916 17:23:21.143894       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I0916 17:23:22.224339       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.100.141.211"}
	E0916 17:23:23.808118       1 watch.go:250] "Unhandled Error" err="http2: stream closed" logger="UnhandledError"
	E0916 17:23:24.365229       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"ingress-nginx\" not found]"
	E0916 17:23:24.373201       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"ingress-nginx\" not found]"
	E0916 17:23:24.386080       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"ingress-nginx\" not found]"
	
	
	==> kube-controller-manager [b0257b39ad86] <==
	I0916 17:23:24.294826       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-bc57996ff" duration="4.74µs"
	I0916 17:23:24.299401       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="ingress-nginx/ingress-nginx-admission-patch" delay="0s"
	W0916 17:23:24.329637       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0916 17:23:24.329682       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0916 17:23:24.407554       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0916 17:23:24.407600       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0916 17:23:24.429373       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0916 17:23:24.429983       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0916 17:23:24.637546       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0916 17:23:24.637594       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0916 17:23:24.813619       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="12.011487ms"
	I0916 17:23:24.813979       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="79.366µs"
	W0916 17:23:27.610442       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0916 17:23:27.610766       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0916 17:23:27.688339       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0916 17:23:27.688392       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0916 17:23:29.576887       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0916 17:23:29.577034       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0916 17:23:30.330942       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0916 17:23:30.330985       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0916 17:23:31.197766       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0916 17:23:31.197816       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0916 17:23:31.620385       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/registry-66c9cd494c" duration="11.518µs"
	W0916 17:23:32.398309       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0916 17:23:32.398351       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	
	
	==> kube-proxy [f32af0845eb7] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0916 17:10:55.085381       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0916 17:10:55.101762       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.110"]
	E0916 17:10:55.101844       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0916 17:10:55.198960       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0916 17:10:55.199011       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0916 17:10:55.199032       1 server_linux.go:169] "Using iptables Proxier"
	I0916 17:10:55.212070       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0916 17:10:55.212353       1 server.go:483] "Version info" version="v1.31.1"
	I0916 17:10:55.212364       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0916 17:10:55.218216       1 config.go:199] "Starting service config controller"
	I0916 17:10:55.218242       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0916 17:10:55.218271       1 config.go:105] "Starting endpoint slice config controller"
	I0916 17:10:55.218276       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0916 17:10:55.218791       1 config.go:328] "Starting node config controller"
	I0916 17:10:55.218800       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0916 17:10:55.320393       1 shared_informer.go:320] Caches are synced for node config
	I0916 17:10:55.320514       1 shared_informer.go:320] Caches are synced for service config
	I0916 17:10:55.320535       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [e138d901a576] <==
	W0916 17:10:43.524524       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0916 17:10:43.524582       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0916 17:10:43.524699       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0916 17:10:43.524725       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 17:10:43.524978       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0916 17:10:43.525018       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0916 17:10:43.525157       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0916 17:10:43.525188       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 17:10:44.444065       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0916 17:10:44.444118       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0916 17:10:44.494623       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0916 17:10:44.494664       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0916 17:10:44.505971       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0916 17:10:44.506077       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0916 17:10:44.604712       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0916 17:10:44.604912       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0916 17:10:44.665289       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0916 17:10:44.665329       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0916 17:10:44.693791       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0916 17:10:44.693830       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 17:10:44.703964       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0916 17:10:44.704150       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 17:10:44.709888       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0916 17:10:44.710035       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0916 17:10:45.005045       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 16 17:23:27 addons-214113 kubelet[1968]: I0916 17:23:27.678596    1968 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0c1d9fa1-f3e4-4ef2-84eb-d667f1d6aaae-kube-api-access-tmnbq" (OuterVolumeSpecName: "kube-api-access-tmnbq") pod "0c1d9fa1-f3e4-4ef2-84eb-d667f1d6aaae" (UID: "0c1d9fa1-f3e4-4ef2-84eb-d667f1d6aaae"). InnerVolumeSpecName "kube-api-access-tmnbq". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 16 17:23:27 addons-214113 kubelet[1968]: I0916 17:23:27.678886    1968 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0c1d9fa1-f3e4-4ef2-84eb-d667f1d6aaae-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "0c1d9fa1-f3e4-4ef2-84eb-d667f1d6aaae" (UID: "0c1d9fa1-f3e4-4ef2-84eb-d667f1d6aaae"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Sep 16 17:23:27 addons-214113 kubelet[1968]: I0916 17:23:27.741077    1968 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-66c9cd494c-smm7x" secret="" err="secret \"gcp-auth\" not found"
	Sep 16 17:23:27 addons-214113 kubelet[1968]: I0916 17:23:27.752943    1968 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0c1d9fa1-f3e4-4ef2-84eb-d667f1d6aaae" path="/var/lib/kubelet/pods/0c1d9fa1-f3e4-4ef2-84eb-d667f1d6aaae/volumes"
	Sep 16 17:23:27 addons-214113 kubelet[1968]: I0916 17:23:27.776181    1968 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-tmnbq\" (UniqueName: \"kubernetes.io/projected/0c1d9fa1-f3e4-4ef2-84eb-d667f1d6aaae-kube-api-access-tmnbq\") on node \"addons-214113\" DevicePath \"\""
	Sep 16 17:23:27 addons-214113 kubelet[1968]: I0916 17:23:27.776218    1968 reconciler_common.go:288] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/0c1d9fa1-f3e4-4ef2-84eb-d667f1d6aaae-webhook-cert\") on node \"addons-214113\" DevicePath \"\""
	Sep 16 17:23:27 addons-214113 kubelet[1968]: I0916 17:23:27.846571    1968 scope.go:117] "RemoveContainer" containerID="ad615b08142977d56c2ad83bb42d0bcfcd5f2a3569ae74b5d3aca37b4beebcc0"
	Sep 16 17:23:27 addons-214113 kubelet[1968]: I0916 17:23:27.864880    1968 scope.go:117] "RemoveContainer" containerID="ad615b08142977d56c2ad83bb42d0bcfcd5f2a3569ae74b5d3aca37b4beebcc0"
	Sep 16 17:23:27 addons-214113 kubelet[1968]: E0916 17:23:27.865622    1968 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = Unknown desc = Error response from daemon: No such container: ad615b08142977d56c2ad83bb42d0bcfcd5f2a3569ae74b5d3aca37b4beebcc0" containerID="ad615b08142977d56c2ad83bb42d0bcfcd5f2a3569ae74b5d3aca37b4beebcc0"
	Sep 16 17:23:27 addons-214113 kubelet[1968]: I0916 17:23:27.865659    1968 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"docker","ID":"ad615b08142977d56c2ad83bb42d0bcfcd5f2a3569ae74b5d3aca37b4beebcc0"} err="failed to get container status \"ad615b08142977d56c2ad83bb42d0bcfcd5f2a3569ae74b5d3aca37b4beebcc0\": rpc error: code = Unknown desc = Error response from daemon: No such container: ad615b08142977d56c2ad83bb42d0bcfcd5f2a3569ae74b5d3aca37b4beebcc0"
	Sep 16 17:23:29 addons-214113 kubelet[1968]: E0916 17:23:29.741930    1968 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-test\" with ImagePullBackOff: \"Back-off pulling image \\\"gcr.io/k8s-minikube/busybox\\\"\"" pod="default/registry-test" podUID="eb496021-8bcc-4a06-9666-08f7d720fc9e"
	Sep 16 17:23:31 addons-214113 kubelet[1968]: I0916 17:23:31.401899    1968 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qsjrr\" (UniqueName: \"kubernetes.io/projected/eb496021-8bcc-4a06-9666-08f7d720fc9e-kube-api-access-qsjrr\") pod \"eb496021-8bcc-4a06-9666-08f7d720fc9e\" (UID: \"eb496021-8bcc-4a06-9666-08f7d720fc9e\") "
	Sep 16 17:23:31 addons-214113 kubelet[1968]: I0916 17:23:31.402215    1968 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/eb496021-8bcc-4a06-9666-08f7d720fc9e-gcp-creds\") pod \"eb496021-8bcc-4a06-9666-08f7d720fc9e\" (UID: \"eb496021-8bcc-4a06-9666-08f7d720fc9e\") "
	Sep 16 17:23:31 addons-214113 kubelet[1968]: I0916 17:23:31.402352    1968 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/eb496021-8bcc-4a06-9666-08f7d720fc9e-gcp-creds" (OuterVolumeSpecName: "gcp-creds") pod "eb496021-8bcc-4a06-9666-08f7d720fc9e" (UID: "eb496021-8bcc-4a06-9666-08f7d720fc9e"). InnerVolumeSpecName "gcp-creds". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Sep 16 17:23:31 addons-214113 kubelet[1968]: I0916 17:23:31.408212    1968 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eb496021-8bcc-4a06-9666-08f7d720fc9e-kube-api-access-qsjrr" (OuterVolumeSpecName: "kube-api-access-qsjrr") pod "eb496021-8bcc-4a06-9666-08f7d720fc9e" (UID: "eb496021-8bcc-4a06-9666-08f7d720fc9e"). InnerVolumeSpecName "kube-api-access-qsjrr". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 16 17:23:31 addons-214113 kubelet[1968]: I0916 17:23:31.502647    1968 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-qsjrr\" (UniqueName: \"kubernetes.io/projected/eb496021-8bcc-4a06-9666-08f7d720fc9e-kube-api-access-qsjrr\") on node \"addons-214113\" DevicePath \"\""
	Sep 16 17:23:31 addons-214113 kubelet[1968]: I0916 17:23:31.502708    1968 reconciler_common.go:288] "Volume detached for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/eb496021-8bcc-4a06-9666-08f7d720fc9e-gcp-creds\") on node \"addons-214113\" DevicePath \"\""
	Sep 16 17:23:32 addons-214113 kubelet[1968]: I0916 17:23:32.048742    1968 scope.go:117] "RemoveContainer" containerID="1b4a8afedae5c13743b6666f8633fdb45af10592ff444bdeea9ef0c6e3874c84"
	Sep 16 17:23:32 addons-214113 kubelet[1968]: I0916 17:23:32.092452    1968 scope.go:117] "RemoveContainer" containerID="4c6475a305cbaaa33f446f57e8da83a90920a4b41e362139ac23b64c4af4a7cc"
	Sep 16 17:23:32 addons-214113 kubelet[1968]: I0916 17:23:32.107337    1968 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hhw6n\" (UniqueName: \"kubernetes.io/projected/998a3900-52e0-4945-9a7d-442a928ba481-kube-api-access-hhw6n\") pod \"998a3900-52e0-4945-9a7d-442a928ba481\" (UID: \"998a3900-52e0-4945-9a7d-442a928ba481\") "
	Sep 16 17:23:32 addons-214113 kubelet[1968]: I0916 17:23:32.111417    1968 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/998a3900-52e0-4945-9a7d-442a928ba481-kube-api-access-hhw6n" (OuterVolumeSpecName: "kube-api-access-hhw6n") pod "998a3900-52e0-4945-9a7d-442a928ba481" (UID: "998a3900-52e0-4945-9a7d-442a928ba481"). InnerVolumeSpecName "kube-api-access-hhw6n". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 16 17:23:32 addons-214113 kubelet[1968]: I0916 17:23:32.208460    1968 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mrktf\" (UniqueName: \"kubernetes.io/projected/0347148d-375f-49b2-a422-6401b38ca5fe-kube-api-access-mrktf\") pod \"0347148d-375f-49b2-a422-6401b38ca5fe\" (UID: \"0347148d-375f-49b2-a422-6401b38ca5fe\") "
	Sep 16 17:23:32 addons-214113 kubelet[1968]: I0916 17:23:32.208572    1968 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-hhw6n\" (UniqueName: \"kubernetes.io/projected/998a3900-52e0-4945-9a7d-442a928ba481-kube-api-access-hhw6n\") on node \"addons-214113\" DevicePath \"\""
	Sep 16 17:23:32 addons-214113 kubelet[1968]: I0916 17:23:32.210161    1968 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0347148d-375f-49b2-a422-6401b38ca5fe-kube-api-access-mrktf" (OuterVolumeSpecName: "kube-api-access-mrktf") pod "0347148d-375f-49b2-a422-6401b38ca5fe" (UID: "0347148d-375f-49b2-a422-6401b38ca5fe"). InnerVolumeSpecName "kube-api-access-mrktf". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 16 17:23:32 addons-214113 kubelet[1968]: I0916 17:23:32.308948    1968 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-mrktf\" (UniqueName: \"kubernetes.io/projected/0347148d-375f-49b2-a422-6401b38ca5fe-kube-api-access-mrktf\") on node \"addons-214113\" DevicePath \"\""
	
	
	==> storage-provisioner [2dfab48d5236] <==
	I0916 17:10:58.589141       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0916 17:10:58.613200       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0916 17:10:58.613248       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0916 17:10:58.631777       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0916 17:10:58.632862       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-214113_041fb864-f0ab-456c-818e-0a336c3a3d8e!
	I0916 17:10:58.636261       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"ae1bb922-9d0c-42b0-8b15-003e67124f3d", APIVersion:"v1", ResourceVersion:"674", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-214113_041fb864-f0ab-456c-818e-0a336c3a3d8e became leader
	I0916 17:10:58.733686       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-214113_041fb864-f0ab-456c-818e-0a336c3a3d8e!
	

-- /stdout --
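Reading the dump above, the registry failure traces back to image pulls: the kubelet reports ImagePullBackOff for gcr.io/k8s-minikube/busybox on the registry-test pod and cannot find the gcp-auth pull secret for registry-66c9cd494c-smm7x. A minimal diagnostic sketch, assuming the addons-214113 profile is still up (these commands are illustrative additions, not part of the test harness):

	kubectl --context addons-214113 -n kube-system get secret gcp-auth
	out/minikube-linux-amd64 -p addons-214113 ssh -- docker pull gcr.io/k8s-minikube/busybox

If the in-VM pull also fails with an authentication error, the problem sits between the node and gcr.io rather than in the registry addon itself.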
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-214113 -n addons-214113
helpers_test.go:261: (dbg) Run:  kubectl --context addons-214113 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/Registry]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-214113 describe pod busybox
helpers_test.go:282: (dbg) kubectl --context addons-214113 describe pod busybox:

-- stdout --
	Name:             busybox
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-214113/192.168.39.110
	Start Time:       Mon, 16 Sep 2024 17:14:17 +0000
	Labels:           integration-test=busybox
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.27
	IPs:
	  IP:  10.244.0.27
	Containers:
	  busybox:
	    Container ID:  
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      sleep
	      3600
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:
	      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
	      PROJECT_ID:                      this_is_fake
	      GCP_PROJECT:                     this_is_fake
	      GCLOUD_PROJECT:                  this_is_fake
	      GOOGLE_CLOUD_PROJECT:            this_is_fake
	      CLOUDSDK_CORE_PROJECT:           this_is_fake
	    Mounts:
	      /google-app-creds.json from gcp-creds (ro)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-drqql (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-drqql:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	  gcp-creds:
	    Type:          HostPath (bare host directory volume)
	    Path:          /var/lib/minikube/google_application_credentials.json
	    HostPathType:  File
	QoS Class:         BestEffort
	Node-Selectors:    <none>
	Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                     From               Message
	  ----     ------     ----                    ----               -------
	  Normal   Scheduled  9m16s                   default-scheduler  Successfully assigned default/busybox to addons-214113
	  Normal   Pulling    7m54s (x4 over 9m15s)   kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Warning  Failed     7m54s (x4 over 9m15s)   kubelet            Failed to pull image "gcr.io/k8s-minikube/busybox:1.28.4-glibc": Error response from daemon: Head "https://gcr.io/v2/k8s-minikube/busybox/manifests/1.28.4-glibc": unauthorized: authentication failed
	  Warning  Failed     7m54s (x4 over 9m15s)   kubelet            Error: ErrImagePull
	  Warning  Failed     7m30s (x6 over 9m14s)   kubelet            Error: ImagePullBackOff
	  Normal   BackOff    4m12s (x20 over 9m14s)  kubelet            Back-off pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"

-- /stdout --
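The events section above is the clearest signal: every pull of gcr.io/k8s-minikube/busybox:1.28.4-glibc fails with "unauthorized: authentication failed" on the manifest HEAD request, so busybox never leaves Pending. A quick cross-check from any machine with network access (a sketch, independent of this cluster):

	docker pull gcr.io/k8s-minikube/busybox:1.28.4-glibc
	curl -sI https://gcr.io/v2/k8s-minikube/busybox/manifests/1.28.4-glibc

If the anonymous pull succeeds elsewhere, the unauthorized response seen here points at a transient gcr.io or credential-injection problem on the test host rather than a wrong image reference.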
helpers_test.go:285: <<< TestAddons/parallel/Registry FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/Registry (73.47s)
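To retry just this case against the same commit, the standard Go test filter applies to the integration suite; this is a sketch assuming the suite runs under plain go test, and any driver or binary-selection flags the harness needs are omitted because the report does not record them:

	go test ./test/integration -run 'TestAddons/parallel/Registry' -v -timeout 30m

Given that the passing runs below exercised the same cluster and addons, a rerun usually distinguishes a flaky image pull from a real registry regression.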


Test pass (309/341)

Order passed test Duration
3 TestDownloadOnly/v1.20.0/json-events 25.57
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.06
9 TestDownloadOnly/v1.20.0/DeleteAll 0.12
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.12
12 TestDownloadOnly/v1.31.1/json-events 10.91
13 TestDownloadOnly/v1.31.1/preload-exists 0
17 TestDownloadOnly/v1.31.1/LogsDuration 0.06
18 TestDownloadOnly/v1.31.1/DeleteAll 0.12
19 TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds 0.11
21 TestBinaryMirror 0.58
22 TestOffline 97.79
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.05
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.05
27 TestAddons/Setup 217.84
29 TestAddons/serial/Volcano 41.67
31 TestAddons/serial/GCPAuth/Namespaces 0.11
34 TestAddons/parallel/Ingress 21.16
35 TestAddons/parallel/InspektorGadget 11.71
36 TestAddons/parallel/MetricsServer 5.62
37 TestAddons/parallel/HelmTiller 11.74
39 TestAddons/parallel/CSI 60.83
40 TestAddons/parallel/Headlamp 11.95
41 TestAddons/parallel/CloudSpanner 5.42
42 TestAddons/parallel/LocalPath 55.4
43 TestAddons/parallel/NvidiaDevicePlugin 5.39
44 TestAddons/parallel/Yakd 10.58
45 TestAddons/StoppedEnableDisable 13.54
46 TestCertOptions 64.85
47 TestCertExpiration 314.38
48 TestDockerFlags 65.14
49 TestForceSystemdFlag 93.96
50 TestForceSystemdEnv 54.51
52 TestKVMDriverInstallOrUpdate 4.69
56 TestErrorSpam/setup 46.03
57 TestErrorSpam/start 0.32
58 TestErrorSpam/status 0.71
59 TestErrorSpam/pause 1.14
60 TestErrorSpam/unpause 1.27
61 TestErrorSpam/stop 14.92
64 TestFunctional/serial/CopySyncFile 0
65 TestFunctional/serial/StartWithProxy 83.4
66 TestFunctional/serial/AuditLog 0
67 TestFunctional/serial/SoftStart 36.38
68 TestFunctional/serial/KubeContext 0.04
69 TestFunctional/serial/KubectlGetPods 0.06
72 TestFunctional/serial/CacheCmd/cache/add_remote 2.75
73 TestFunctional/serial/CacheCmd/cache/add_local 1.33
74 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.04
75 TestFunctional/serial/CacheCmd/cache/list 0.04
76 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.2
77 TestFunctional/serial/CacheCmd/cache/cache_reload 1.14
78 TestFunctional/serial/CacheCmd/cache/delete 0.09
79 TestFunctional/serial/MinikubeKubectlCmd 0.1
80 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.1
81 TestFunctional/serial/ExtraConfig 40.41
82 TestFunctional/serial/ComponentHealth 0.06
83 TestFunctional/serial/LogsCmd 0.85
84 TestFunctional/serial/LogsFileCmd 0.91
85 TestFunctional/serial/InvalidService 4.48
87 TestFunctional/parallel/ConfigCmd 0.29
88 TestFunctional/parallel/DashboardCmd 14.51
89 TestFunctional/parallel/DryRun 0.24
90 TestFunctional/parallel/InternationalLanguage 0.13
91 TestFunctional/parallel/StatusCmd 0.76
95 TestFunctional/parallel/ServiceCmdConnect 23.65
96 TestFunctional/parallel/AddonsCmd 0.12
97 TestFunctional/parallel/PersistentVolumeClaim 47.06
99 TestFunctional/parallel/SSHCmd 0.41
100 TestFunctional/parallel/CpCmd 1.24
101 TestFunctional/parallel/MySQL 27.47
102 TestFunctional/parallel/FileSync 0.2
103 TestFunctional/parallel/CertSync 1.24
107 TestFunctional/parallel/NodeLabels 0.06
109 TestFunctional/parallel/NonActiveRuntimeDisabled 0.21
111 TestFunctional/parallel/License 0.59
112 TestFunctional/parallel/Version/short 0.04
113 TestFunctional/parallel/Version/components 0.58
114 TestFunctional/parallel/ImageCommands/ImageListShort 0.21
115 TestFunctional/parallel/ImageCommands/ImageListTable 0.3
116 TestFunctional/parallel/ImageCommands/ImageListJson 0.22
117 TestFunctional/parallel/ImageCommands/ImageListYaml 0.2
118 TestFunctional/parallel/ImageCommands/ImageBuild 3.56
119 TestFunctional/parallel/ImageCommands/Setup 1.75
120 TestFunctional/parallel/DockerEnv/bash 0.76
121 TestFunctional/parallel/UpdateContextCmd/no_changes 0.09
122 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.1
123 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.1
124 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.06
134 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.74
135 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.6
136 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.35
137 TestFunctional/parallel/ImageCommands/ImageRemove 0.4
138 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 1.14
139 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.46
140 TestFunctional/parallel/ServiceCmd/DeployApp 26.16
141 TestFunctional/parallel/ProfileCmd/profile_not_create 0.33
142 TestFunctional/parallel/ProfileCmd/profile_list 0.31
143 TestFunctional/parallel/ProfileCmd/profile_json_output 0.31
144 TestFunctional/parallel/MountCmd/any-port 7.6
145 TestFunctional/parallel/ServiceCmd/List 1.38
146 TestFunctional/parallel/MountCmd/specific-port 1.84
147 TestFunctional/parallel/ServiceCmd/JSONOutput 1.24
148 TestFunctional/parallel/ServiceCmd/HTTPS 0.36
149 TestFunctional/parallel/MountCmd/VerifyCleanup 1.63
150 TestFunctional/parallel/ServiceCmd/Format 0.37
151 TestFunctional/parallel/ServiceCmd/URL 0.29
152 TestFunctional/delete_echo-server_images 0.03
153 TestFunctional/delete_my-image_image 0.01
154 TestFunctional/delete_minikube_cached_images 0.01
155 TestGvisorAddon 211.09
158 TestMultiControlPlane/serial/StartCluster 215.93
159 TestMultiControlPlane/serial/DeployApp 5.52
160 TestMultiControlPlane/serial/PingHostFromPods 1.18
161 TestMultiControlPlane/serial/AddWorkerNode 59.3
162 TestMultiControlPlane/serial/NodeLabels 0.06
163 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.51
164 TestMultiControlPlane/serial/CopyFile 12.08
165 TestMultiControlPlane/serial/StopSecondaryNode 13.18
166 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.37
167 TestMultiControlPlane/serial/RestartSecondaryNode 42.48
168 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.5
169 TestMultiControlPlane/serial/RestartClusterKeepsNodes 227.62
170 TestMultiControlPlane/serial/DeleteSecondaryNode 6.88
171 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.37
172 TestMultiControlPlane/serial/StopCluster 38.08
173 TestMultiControlPlane/serial/RestartCluster 156.45
174 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.36
175 TestMultiControlPlane/serial/AddSecondaryNode 82.63
176 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.52
179 TestImageBuild/serial/Setup 46.02
180 TestImageBuild/serial/NormalBuild 2.75
181 TestImageBuild/serial/BuildWithBuildArg 1.19
182 TestImageBuild/serial/BuildWithDockerIgnore 1.02
183 TestImageBuild/serial/BuildWithSpecifiedDockerfile 0.82
187 TestJSONOutput/start/Command 58.7
188 TestJSONOutput/start/Audit 0
190 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
191 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
193 TestJSONOutput/pause/Command 0.51
194 TestJSONOutput/pause/Audit 0
196 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
197 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
199 TestJSONOutput/unpause/Command 0.5
200 TestJSONOutput/unpause/Audit 0
202 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
203 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
205 TestJSONOutput/stop/Command 7.46
206 TestJSONOutput/stop/Audit 0
208 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
209 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
210 TestErrorJSONOutput 0.19
215 TestMainNoArgs 0.04
216 TestMinikubeProfile 97.35
219 TestMountStart/serial/StartWithMountFirst 30.01
220 TestMountStart/serial/VerifyMountFirst 0.37
221 TestMountStart/serial/StartWithMountSecond 28.77
222 TestMountStart/serial/VerifyMountSecond 0.37
223 TestMountStart/serial/DeleteFirst 0.67
224 TestMountStart/serial/VerifyMountPostDelete 0.37
225 TestMountStart/serial/Stop 2.39
226 TestMountStart/serial/RestartStopped 24.1
227 TestMountStart/serial/VerifyMountPostStop 0.37
230 TestMultiNode/serial/FreshStart2Nodes 122.88
231 TestMultiNode/serial/DeployApp2Nodes 4.79
232 TestMultiNode/serial/PingHostFrom2Pods 0.79
233 TestMultiNode/serial/AddNode 57.1
234 TestMultiNode/serial/MultiNodeLabels 0.06
235 TestMultiNode/serial/ProfileList 0.21
236 TestMultiNode/serial/CopyFile 6.97
237 TestMultiNode/serial/StopNode 3.19
238 TestMultiNode/serial/StartAfterStop 41.5
239 TestMultiNode/serial/RestartKeepsNodes 168.97
240 TestMultiNode/serial/DeleteNode 2.11
241 TestMultiNode/serial/StopMultiNode 24.92
242 TestMultiNode/serial/RestartMultiNode 111.85
243 TestMultiNode/serial/ValidateNameConflict 50.16
248 TestPreload 249.8
250 TestScheduledStopUnix 119.59
251 TestSkaffold 125.05
254 TestRunningBinaryUpgrade 196.28
256 TestKubernetesUpgrade 179.81
258 TestStoppedBinaryUpgrade/Setup 2.24
260 TestPause/serial/Start 89.05
261 TestStoppedBinaryUpgrade/Upgrade 168.37
262 TestPause/serial/SecondStartNoReconfiguration 54.33
270 TestPause/serial/Pause 0.7
271 TestPause/serial/VerifyStatus 0.27
272 TestPause/serial/Unpause 0.6
273 TestPause/serial/PauseAgain 0.78
274 TestPause/serial/DeletePaused 0.81
275 TestPause/serial/VerifyDeletedResources 1.48
277 TestNoKubernetes/serial/StartNoK8sWithVersion 0.07
278 TestNoKubernetes/serial/StartWithK8s 54.18
290 TestStoppedBinaryUpgrade/MinikubeLogs 0.99
291 TestNoKubernetes/serial/StartWithStopK8s 69.53
292 TestNoKubernetes/serial/Start 31.48
293 TestNoKubernetes/serial/VerifyK8sNotRunning 0.19
294 TestNoKubernetes/serial/ProfileList 0.93
295 TestNoKubernetes/serial/Stop 2.49
296 TestNoKubernetes/serial/StartNoArgs 62.31
297 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.19
299 TestStartStop/group/old-k8s-version/serial/FirstStart 207.55
301 TestStartStop/group/no-preload/serial/FirstStart 119.69
303 TestStartStop/group/embed-certs/serial/FirstStart 93.84
304 TestStartStop/group/no-preload/serial/DeployApp 9.4
306 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 87.31
307 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.16
308 TestStartStop/group/no-preload/serial/Stop 13.38
309 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.2
310 TestStartStop/group/no-preload/serial/SecondStart 315.87
311 TestStartStop/group/old-k8s-version/serial/DeployApp 11.61
312 TestStartStop/group/embed-certs/serial/DeployApp 9.32
313 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.82
314 TestStartStop/group/old-k8s-version/serial/Stop 12.72
315 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.92
316 TestStartStop/group/embed-certs/serial/Stop 13.31
317 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.22
318 TestStartStop/group/old-k8s-version/serial/SecondStart 397.08
319 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.19
320 TestStartStop/group/embed-certs/serial/SecondStart 366.17
321 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 8.33
322 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.05
323 TestStartStop/group/default-k8s-diff-port/serial/Stop 12.59
324 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.18
325 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 295.3
326 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 11.01
327 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.08
328 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.2
329 TestStartStop/group/no-preload/serial/Pause 2.3
331 TestStartStop/group/newest-cni/serial/FirstStart 58.1
332 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6.01
333 TestStartStop/group/newest-cni/serial/DeployApp 0
334 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.75
335 TestStartStop/group/newest-cni/serial/Stop 8.29
336 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.08
337 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.21
338 TestStartStop/group/default-k8s-diff-port/serial/Pause 2.54
339 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.25
340 TestStartStop/group/newest-cni/serial/SecondStart 40.46
341 TestNetworkPlugins/group/auto/Start 90.05
342 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6.01
343 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.07
344 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.22
345 TestStartStop/group/embed-certs/serial/Pause 2.31
346 TestNetworkPlugins/group/kindnet/Start 114.37
347 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6.01
348 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.07
349 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
350 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
351 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.2
352 TestStartStop/group/newest-cni/serial/Pause 2.09
353 TestNetworkPlugins/group/calico/Start 122.97
354 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.2
355 TestStartStop/group/old-k8s-version/serial/Pause 2.13
356 TestNetworkPlugins/group/custom-flannel/Start 130.05
357 TestNetworkPlugins/group/auto/KubeletFlags 0.2
358 TestNetworkPlugins/group/auto/NetCatPod 10.23
359 TestNetworkPlugins/group/auto/DNS 0.19
360 TestNetworkPlugins/group/auto/Localhost 0.16
361 TestNetworkPlugins/group/auto/HairPin 0.14
362 TestNetworkPlugins/group/false/Start 108.13
363 TestNetworkPlugins/group/kindnet/ControllerPod 6
364 TestNetworkPlugins/group/kindnet/KubeletFlags 0.2
365 TestNetworkPlugins/group/kindnet/NetCatPod 12.21
366 TestNetworkPlugins/group/kindnet/DNS 0.2
367 TestNetworkPlugins/group/kindnet/Localhost 0.18
368 TestNetworkPlugins/group/kindnet/HairPin 0.13
369 TestNetworkPlugins/group/calico/ControllerPod 6.01
370 TestNetworkPlugins/group/calico/KubeletFlags 0.24
371 TestNetworkPlugins/group/calico/NetCatPod 11.27
372 TestNetworkPlugins/group/enable-default-cni/Start 63.58
373 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.24
374 TestNetworkPlugins/group/custom-flannel/NetCatPod 12.25
375 TestNetworkPlugins/group/calico/DNS 0.19
376 TestNetworkPlugins/group/calico/Localhost 0.15
377 TestNetworkPlugins/group/calico/HairPin 0.14
378 TestNetworkPlugins/group/custom-flannel/DNS 0.2
379 TestNetworkPlugins/group/custom-flannel/Localhost 0.15
380 TestNetworkPlugins/group/custom-flannel/HairPin 0.15
381 TestNetworkPlugins/group/flannel/Start 73.13
382 TestNetworkPlugins/group/bridge/Start 118.37
383 TestNetworkPlugins/group/false/KubeletFlags 0.22
384 TestNetworkPlugins/group/false/NetCatPod 11.25
385 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.22
386 TestNetworkPlugins/group/enable-default-cni/NetCatPod 12.26
387 TestNetworkPlugins/group/false/DNS 0.18
388 TestNetworkPlugins/group/false/Localhost 0.13
389 TestNetworkPlugins/group/false/HairPin 0.13
390 TestNetworkPlugins/group/enable-default-cni/DNS 0.18
391 TestNetworkPlugins/group/enable-default-cni/Localhost 0.16
392 TestNetworkPlugins/group/enable-default-cni/HairPin 0.15
393 TestNetworkPlugins/group/kubenet/Start 67.7
394 TestNetworkPlugins/group/flannel/ControllerPod 6.01
395 TestNetworkPlugins/group/flannel/KubeletFlags 0.2
396 TestNetworkPlugins/group/flannel/NetCatPod 10.2
397 TestNetworkPlugins/group/flannel/DNS 0.21
398 TestNetworkPlugins/group/flannel/Localhost 0.15
399 TestNetworkPlugins/group/flannel/HairPin 0.14
400 TestNetworkPlugins/group/kubenet/KubeletFlags 0.2
401 TestNetworkPlugins/group/kubenet/NetCatPod 13.22
402 TestNetworkPlugins/group/bridge/KubeletFlags 0.2
403 TestNetworkPlugins/group/bridge/NetCatPod 12.25
404 TestNetworkPlugins/group/kubenet/DNS 0.15
405 TestNetworkPlugins/group/kubenet/Localhost 0.13
406 TestNetworkPlugins/group/kubenet/HairPin 0.13
407 TestNetworkPlugins/group/bridge/DNS 0.15
408 TestNetworkPlugins/group/bridge/Localhost 0.12
409 TestNetworkPlugins/group/bridge/HairPin 0.13

TestDownloadOnly/v1.20.0/json-events (25.57s)

=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-145371 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=kvm2 
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-145371 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=kvm2 : (25.571354536s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (25.57s)
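With -o=json, minikube prints one JSON event per line instead of human-readable progress, which is what the json-events assertions consume. A sketch of eyeballing that stream with jq (the .data.message field name follows minikube's CloudEvents-style output and should be treated as an assumption, not a stable API):

    out/minikube-linux-amd64 start -o=json --download-only -p download-only-145371 \
      --force --kubernetes-version=v1.20.0 --container-runtime=docker --driver=kvm2 \
      | jq -r 'select(.data.message != null) | .data.message'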

TestDownloadOnly/v1.20.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.20.0/preload-exists
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

TestDownloadOnly/v1.20.0/LogsDuration (0.06s)

=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-145371
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-145371: exit status 85 (55.638673ms)
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-145371 | jenkins | v1.34.0 | 16 Sep 24 17:09 UTC |          |
	|         | -p download-only-145371        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/16 17:09:20
	Running on machine: ubuntu-20-agent-6
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0916 17:09:20.218021  382973 out.go:345] Setting OutFile to fd 1 ...
	I0916 17:09:20.218151  382973 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 17:09:20.218162  382973 out.go:358] Setting ErrFile to fd 2...
	I0916 17:09:20.218168  382973 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 17:09:20.218379  382973 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19649-375661/.minikube/bin
	W0916 17:09:20.218505  382973 root.go:314] Error reading config file at /home/jenkins/minikube-integration/19649-375661/.minikube/config/config.json: open /home/jenkins/minikube-integration/19649-375661/.minikube/config/config.json: no such file or directory
	I0916 17:09:20.219043  382973 out.go:352] Setting JSON to true
	I0916 17:09:20.220015  382973 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":3104,"bootTime":1726503456,"procs":179,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0916 17:09:20.220203  382973 start.go:139] virtualization: kvm guest
	I0916 17:09:20.222345  382973 out.go:97] [download-only-145371] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	W0916 17:09:20.222476  382973 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/19649-375661/.minikube/cache/preloaded-tarball: no such file or directory
	I0916 17:09:20.222523  382973 notify.go:220] Checking for updates...
	I0916 17:09:20.223579  382973 out.go:169] MINIKUBE_LOCATION=19649
	I0916 17:09:20.224690  382973 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0916 17:09:20.225855  382973 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19649-375661/kubeconfig
	I0916 17:09:20.226912  382973 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19649-375661/.minikube
	I0916 17:09:20.227957  382973 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0916 17:09:20.229847  382973 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0916 17:09:20.230051  382973 driver.go:394] Setting default libvirt URI to qemu:///system
	I0916 17:09:20.260354  382973 out.go:97] Using the kvm2 driver based on user configuration
	I0916 17:09:20.260379  382973 start.go:297] selected driver: kvm2
	I0916 17:09:20.260386  382973 start.go:901] validating driver "kvm2" against <nil>
	I0916 17:09:20.260731  382973 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 17:09:20.260843  382973 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19649-375661/.minikube/bin:/home/jenkins/workspace/KVM_Linux_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0916 17:09:20.274804  382973 install.go:137] /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0916 17:09:20.274857  382973 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0916 17:09:20.275353  382973 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0916 17:09:20.275532  382973 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0916 17:09:20.275572  382973 cni.go:84] Creating CNI manager for ""
	I0916 17:09:20.275638  382973 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0916 17:09:20.275700  382973 start.go:340] cluster config:
	{Name:download-only-145371 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726481311-19649@sha256:b5dfdcf7ad8b49233db09f1c58aaf52f6522fde64cf16c939b3fc45365d24cdc Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-145371 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 17:09:20.275873  382973 iso.go:125] acquiring lock: {Name:mk520a410f89666950ce2caf9879a799775a7873 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 17:09:20.277284  382973 out.go:97] Downloading VM boot image ...
	I0916 17:09:20.277329  382973 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso.sha256 -> /home/jenkins/minikube-integration/19649-375661/.minikube/cache/iso/amd64/minikube-v1.34.0-1726481713-19649-amd64.iso
	I0916 17:09:34.110230  382973 out.go:97] Starting "download-only-145371" primary control-plane node in "download-only-145371" cluster
	I0916 17:09:34.110258  382973 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0916 17:09:34.210140  382973 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4
	I0916 17:09:34.210165  382973 cache.go:56] Caching tarball of preloaded images
	I0916 17:09:34.210300  382973 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0916 17:09:34.211746  382973 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0916 17:09:34.211758  382973 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4 ...
	I0916 17:09:34.317924  382973 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4?checksum=md5:9a82241e9b8b4ad2b5cca73108f2c7a3 -> /home/jenkins/minikube-integration/19649-375661/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4
	I0916 17:09:44.219568  382973 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4 ...
	I0916 17:09:44.219652  382973 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/19649-375661/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4 ...
	I0916 17:09:44.990490  382973 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0916 17:09:44.990841  382973 profile.go:143] Saving config to /home/jenkins/minikube-integration/19649-375661/.minikube/profiles/download-only-145371/config.json ...
	I0916 17:09:44.990872  382973 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19649-375661/.minikube/profiles/download-only-145371/config.json: {Name:mkf8c9dc5e42eb2291eb3ee23f0281a1381db7d6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 17:09:44.991029  382973 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0916 17:09:44.991189  382973 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/linux/amd64/kubectl.sha256 -> /home/jenkins/minikube-integration/19649-375661/.minikube/cache/linux/amd64/v1.20.0/kubectl
	
	
	* The control-plane node download-only-145371 host does not exist
	  To start a cluster, run: "minikube start -p download-only-145371"
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.06s)
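The log above shows how the preload is trusted: the tarball URL carries a ?checksum=md5:... hint, and minikube verifies the download against it before caching. The same check by hand, reusing the URL and md5 from this run:

    url='https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4'
    curl -fsSLo preload.tar.lz4 "$url"
    # md5sum -c expects "<checksum>  <file>"; the checksum comes from the download URL.
    echo '9a82241e9b8b4ad2b5cca73108f2c7a3  preload.tar.lz4' | md5sum -c -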

TestDownloadOnly/v1.20.0/DeleteAll (0.12s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.12s)

TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.12s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-145371
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.12s)

TestDownloadOnly/v1.31.1/json-events (10.91s)

=== RUN   TestDownloadOnly/v1.31.1/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-590311 --force --alsologtostderr --kubernetes-version=v1.31.1 --container-runtime=docker --driver=kvm2 
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-590311 --force --alsologtostderr --kubernetes-version=v1.31.1 --container-runtime=docker --driver=kvm2 : (10.910358746s)
--- PASS: TestDownloadOnly/v1.31.1/json-events (10.91s)

TestDownloadOnly/v1.31.1/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.31.1/preload-exists
--- PASS: TestDownloadOnly/v1.31.1/preload-exists (0.00s)

TestDownloadOnly/v1.31.1/LogsDuration (0.06s)

=== RUN   TestDownloadOnly/v1.31.1/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-590311
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-590311: exit status 85 (54.689291ms)
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-145371 | jenkins | v1.34.0 | 16 Sep 24 17:09 UTC |                     |
	|         | -p download-only-145371        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.34.0 | 16 Sep 24 17:09 UTC | 16 Sep 24 17:09 UTC |
	| delete  | -p download-only-145371        | download-only-145371 | jenkins | v1.34.0 | 16 Sep 24 17:09 UTC | 16 Sep 24 17:09 UTC |
	| start   | -o=json --download-only        | download-only-590311 | jenkins | v1.34.0 | 16 Sep 24 17:09 UTC |                     |
	|         | -p download-only-590311        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/16 17:09:46
	Running on machine: ubuntu-20-agent-6
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0916 17:09:46.088592  383225 out.go:345] Setting OutFile to fd 1 ...
	I0916 17:09:46.088824  383225 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 17:09:46.088837  383225 out.go:358] Setting ErrFile to fd 2...
	I0916 17:09:46.088841  383225 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 17:09:46.088994  383225 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19649-375661/.minikube/bin
	I0916 17:09:46.089497  383225 out.go:352] Setting JSON to true
	I0916 17:09:46.090369  383225 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":3130,"bootTime":1726503456,"procs":177,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0916 17:09:46.090467  383225 start.go:139] virtualization: kvm guest
	I0916 17:09:46.092082  383225 out.go:97] [download-only-590311] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0916 17:09:46.092232  383225 notify.go:220] Checking for updates...
	I0916 17:09:46.093255  383225 out.go:169] MINIKUBE_LOCATION=19649
	I0916 17:09:46.094302  383225 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0916 17:09:46.095499  383225 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19649-375661/kubeconfig
	I0916 17:09:46.096422  383225 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19649-375661/.minikube
	I0916 17:09:46.097448  383225 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0916 17:09:46.099228  383225 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0916 17:09:46.099476  383225 driver.go:394] Setting default libvirt URI to qemu:///system
	I0916 17:09:46.128884  383225 out.go:97] Using the kvm2 driver based on user configuration
	I0916 17:09:46.128902  383225 start.go:297] selected driver: kvm2
	I0916 17:09:46.128907  383225 start.go:901] validating driver "kvm2" against <nil>
	I0916 17:09:46.129216  383225 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 17:09:46.129284  383225 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19649-375661/.minikube/bin:/home/jenkins/workspace/KVM_Linux_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0916 17:09:46.142793  383225 install.go:137] /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0916 17:09:46.142830  383225 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0916 17:09:46.143281  383225 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0916 17:09:46.143407  383225 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0916 17:09:46.143434  383225 cni.go:84] Creating CNI manager for ""
	I0916 17:09:46.143479  383225 cni.go:158] "kvm2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0916 17:09:46.143491  383225 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0916 17:09:46.143540  383225 start.go:340] cluster config:
	{Name:download-only-590311 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726481311-19649@sha256:b5dfdcf7ad8b49233db09f1c58aaf52f6522fde64cf16c939b3fc45365d24cdc Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:download-only-590311 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 17:09:46.143646  383225 iso.go:125] acquiring lock: {Name:mk520a410f89666950ce2caf9879a799775a7873 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 17:09:46.144769  383225 out.go:97] Starting "download-only-590311" primary control-plane node in "download-only-590311" cluster
	I0916 17:09:46.144780  383225 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0916 17:09:46.658647  383225 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.1/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4
	I0916 17:09:46.658676  383225 cache.go:56] Caching tarball of preloaded images
	I0916 17:09:46.658864  383225 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0916 17:09:46.660328  383225 out.go:97] Downloading Kubernetes v1.31.1 preload ...
	I0916 17:09:46.660345  383225 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4 ...
	I0916 17:09:46.762062  383225 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.1/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4?checksum=md5:42e9a173dd5f0c45ed1a890dd06aec5a -> /home/jenkins/minikube-integration/19649-375661/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4
	I0916 17:09:55.391288  383225 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4 ...
	I0916 17:09:55.391378  383225 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/19649-375661/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4 ...
	I0916 17:09:56.049348  383225 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0916 17:09:56.049715  383225 profile.go:143] Saving config to /home/jenkins/minikube-integration/19649-375661/.minikube/profiles/download-only-590311/config.json ...
	I0916 17:09:56.049746  383225 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19649-375661/.minikube/profiles/download-only-590311/config.json: {Name:mka79d0a580bb060a044f7f0897f803754a1871f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 17:09:56.049892  383225 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0916 17:09:56.050019  383225 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl.sha256 -> /home/jenkins/minikube-integration/19649-375661/.minikube/cache/linux/amd64/v1.31.1/kubectl
	
	
	* The control-plane node download-only-590311 host does not exist
	  To start a cluster, run: "minikube start -p download-only-590311"
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.1/LogsDuration (0.06s)

TestDownloadOnly/v1.31.1/DeleteAll (0.12s)

=== RUN   TestDownloadOnly/v1.31.1/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.31.1/DeleteAll (0.12s)

TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds (0.11s)

=== RUN   TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-590311
--- PASS: TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds (0.11s)

TestBinaryMirror (0.58s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-881316 --alsologtostderr --binary-mirror http://127.0.0.1:40603 --driver=kvm2 
helpers_test.go:175: Cleaning up "binary-mirror-881316" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-881316
--- PASS: TestBinaryMirror (0.58s)
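TestBinaryMirror points --binary-mirror at a local HTTP endpoint (127.0.0.1:40603 in this run) so kubectl, kubelet, and kubeadm are fetched from it instead of dl.k8s.io. A rough manual equivalent, assuming the mirror must expose the same /release/<version>/bin/linux/amd64/ layout that dl.k8s.io uses (the profile name binary-mirror-demo is made up for the example):

    mkdir -p mirror/release/v1.31.1/bin/linux/amd64
    curl -fsSLo mirror/release/v1.31.1/bin/linux/amd64/kubectl \
      https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl
    # Serve the directory, then point minikube's binary downloads at it.
    python3 -m http.server 40603 --directory mirror &
    out/minikube-linux-amd64 start --download-only -p binary-mirror-demo \
      --binary-mirror http://127.0.0.1:40603 --driver=kvm2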

TestOffline (97.79s)

=== RUN   TestOffline
=== PAUSE TestOffline
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-docker-419042 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2 
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-docker-419042 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2 : (1m36.815900531s)
helpers_test.go:175: Cleaning up "offline-docker-419042" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-docker-419042
--- PASS: TestOffline (97.79s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1037: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-214113
addons_test.go:1037: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-214113: exit status 85 (48.266729ms)
-- stdout --
	* Profile "addons-214113" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-214113"
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1048: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-214113
addons_test.go:1048: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-214113: exit status 85 (49.31366ms)
-- stdout --
	* Profile "addons-214113" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-214113"
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)
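Both PreSetup checks hinge on minikube refusing addon commands for a profile that does not exist yet, signalled by exit status 85 rather than by creating the profile as a side effect. Asserting that from a shell looks roughly like:

    if out/minikube-linux-amd64 addons enable dashboard -p addons-214113; then
      echo "unexpected success: profile should not exist yet"
    else
      # $? still holds the exit status of the failed minikube invocation here.
      [ "$?" -eq 85 ] && echo "got the expected 'no such profile' exit status 85"
    fi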

TestAddons/Setup (217.84s)

=== RUN   TestAddons/Setup
addons_test.go:110: (dbg) Run:  out/minikube-linux-amd64 start -p addons-214113 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=kvm2  --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:110: (dbg) Done: out/minikube-linux-amd64 start -p addons-214113 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=kvm2  --addons=ingress --addons=ingress-dns --addons=helm-tiller: (3m37.838017093s)
--- PASS: TestAddons/Setup (217.84s)

TestAddons/serial/Volcano (41.67s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:897: volcano-scheduler stabilized in 11.504915ms
addons_test.go:913: volcano-controller stabilized in 11.555774ms
addons_test.go:905: volcano-admission stabilized in 11.615104ms
addons_test.go:919: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-scheduler-576bc46687-npvxw" [6a0d7f7b-f3cc-467b-a3f6-93facb111099] Running
addons_test.go:919: (dbg) TestAddons/serial/Volcano: app=volcano-scheduler healthy within 6.003970933s
addons_test.go:923: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-admission-77d7d48b68-lf2vz" [afd36228-a85d-49aa-830b-a391b5cdda6e] Running
addons_test.go:923: (dbg) TestAddons/serial/Volcano: app=volcano-admission healthy within 5.004211391s
addons_test.go:927: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-controllers-56675bb4d5-qc4s9" [ca33bf08-5931-4b31-85cc-e84aa1f9a1ea] Running
addons_test.go:927: (dbg) TestAddons/serial/Volcano: app=volcano-controller healthy within 5.003411661s
addons_test.go:932: (dbg) Run:  kubectl --context addons-214113 delete -n volcano-system job volcano-admission-init
addons_test.go:938: (dbg) Run:  kubectl --context addons-214113 create -f testdata/vcjob.yaml
addons_test.go:946: (dbg) Run:  kubectl --context addons-214113 get vcjob -n my-volcano
addons_test.go:964: (dbg) TestAddons/serial/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:344: "test-job-nginx-0" [6cd901ce-4606-4207-abca-08dd594ea285] Pending
helpers_test.go:344: "test-job-nginx-0" [6cd901ce-4606-4207-abca-08dd594ea285] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "test-job-nginx-0" [6cd901ce-4606-4207-abca-08dd594ea285] Running
addons_test.go:964: (dbg) TestAddons/serial/Volcano: volcano.sh/job-name=test-job healthy within 15.003656741s
addons_test.go:968: (dbg) Run:  out/minikube-linux-amd64 -p addons-214113 addons disable volcano --alsologtostderr -v=1
addons_test.go:968: (dbg) Done: out/minikube-linux-amd64 -p addons-214113 addons disable volcano --alsologtostderr -v=1: (10.297512226s)
--- PASS: TestAddons/serial/Volcano (41.67s)
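The Volcano check deletes the admission-init job, submits testdata/vcjob.yaml, and waits for a pod labelled volcano.sh/job-name=test-job in the my-volcano namespace. The manifest itself is not reproduced in the log; a minimal Volcano Job of the same shape (the name, namespace, and nginx task are inferred from the pod name test-job-nginx-0 above, so treat the whole manifest as an illustrative assumption) would look roughly like:

    kubectl --context addons-214113 create namespace my-volcano
    kubectl --context addons-214113 apply -f - <<'EOF'
    apiVersion: batch.volcano.sh/v1alpha1
    kind: Job
    metadata:
      name: test-job
      namespace: my-volcano
    spec:
      schedulerName: volcano
      minAvailable: 1
      tasks:
        - replicas: 1
          name: nginx
          template:
            spec:
              restartPolicy: Never
              containers:
                - name: nginx
                  image: nginx
    EOF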

TestAddons/serial/GCPAuth/Namespaces (0.11s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:656: (dbg) Run:  kubectl --context addons-214113 create ns new-namespace
addons_test.go:670: (dbg) Run:  kubectl --context addons-214113 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.11s)

TestAddons/parallel/Ingress (21.16s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress
=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-214113 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-214113 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-214113 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [49295f8d-d03e-430c-b6af-567789f716fe] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [49295f8d-d03e-430c-b6af-567789f716fe] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 11.005861328s
addons_test.go:264: (dbg) Run:  out/minikube-linux-amd64 -p addons-214113 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:288: (dbg) Run:  kubectl --context addons-214113 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-linux-amd64 -p addons-214113 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.39.110
addons_test.go:308: (dbg) Run:  out/minikube-linux-amd64 -p addons-214113 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:308: (dbg) Done: out/minikube-linux-amd64 -p addons-214113 addons disable ingress-dns --alsologtostderr -v=1: (1.440838638s)
addons_test.go:313: (dbg) Run:  out/minikube-linux-amd64 -p addons-214113 addons disable ingress --alsologtostderr -v=1
addons_test.go:313: (dbg) Done: out/minikube-linux-amd64 -p addons-214113 addons disable ingress --alsologtostderr -v=1: (7.66248063s)
--- PASS: TestAddons/parallel/Ingress (21.16s)
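The ingress assertion curls from inside the VM with an explicit Host: header. From the host, the same request can be expressed with curl's --resolve, pinning the test hostname to the cluster IP the test printed (192.168.39.110 in this run):

    MINIKUBE_IP=$(out/minikube-linux-amd64 -p addons-214113 ip)
    curl -s --resolve "nginx.example.com:80:${MINIKUBE_IP}" http://nginx.example.com/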

TestAddons/parallel/InspektorGadget (11.71s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:848: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-nsjl9" [9375e11d-c919-4d3a-8a76-99e17302cd1d] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:848: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.004220098s
addons_test.go:851: (dbg) Run:  out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-214113
addons_test.go:851: (dbg) Done: out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-214113: (5.705271735s)
--- PASS: TestAddons/parallel/InspektorGadget (11.71s)

TestAddons/parallel/MetricsServer (5.62s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:409: metrics-server stabilized in 3.80561ms
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-84c5f94fbc-vbk42" [6c2237a0-5a07-4a63-95b2-765b42ce9480] Running
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.003258234s
addons_test.go:417: (dbg) Run:  kubectl --context addons-214113 top pods -n kube-system
addons_test.go:434: (dbg) Run:  out/minikube-linux-amd64 -p addons-214113 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.62s)

                                                
                                    
TestAddons/parallel/HelmTiller (11.74s)

=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:458: tiller-deploy stabilized in 1.877228ms
addons_test.go:460: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:344: "tiller-deploy-b48cc5f79-dnc5m" [0f4e2597-a4d1-4c47-a5a2-79ef2c7607d4] Running
addons_test.go:460: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 6.005470365s
addons_test.go:475: (dbg) Run:  kubectl --context addons-214113 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:475: (dbg) Done: kubectl --context addons-214113 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (5.204059657s)
addons_test.go:492: (dbg) Run:  out/minikube-linux-amd64 -p addons-214113 addons disable helm-tiller --alsologtostderr -v=1
--- PASS: TestAddons/parallel/HelmTiller (11.74s)

                                                
                                    
TestAddons/parallel/CSI (60.83s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI
=== CONT  TestAddons/parallel/CSI
addons_test.go:567: csi-hostpath-driver pods stabilized in 6.727678ms
addons_test.go:570: (dbg) Run:  kubectl --context addons-214113 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:575: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-214113 get pvc hpvc -o jsonpath={.status.phase} -n default (repeated 17 times while polling)
addons_test.go:580: (dbg) Run:  kubectl --context addons-214113 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:585: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [1cd8519f-0681-4972-a444-307a6b7c060e] Pending
helpers_test.go:344: "task-pv-pod" [1cd8519f-0681-4972-a444-307a6b7c060e] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [1cd8519f-0681-4972-a444-307a6b7c060e] Running
addons_test.go:585: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 9.004302529s
addons_test.go:590: (dbg) Run:  kubectl --context addons-214113 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:595: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-214113 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-214113 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:600: (dbg) Run:  kubectl --context addons-214113 delete pod task-pv-pod
addons_test.go:606: (dbg) Run:  kubectl --context addons-214113 delete pvc hpvc
addons_test.go:612: (dbg) Run:  kubectl --context addons-214113 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:617: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-214113 get pvc hpvc-restore -o jsonpath={.status.phase} -n default (repeated 18 times while polling)
addons_test.go:622: (dbg) Run:  kubectl --context addons-214113 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:627: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [d67b3bc1-cbfd-4d04-bdd0-f22d24da79e3] Pending
helpers_test.go:344: "task-pv-pod-restore" [d67b3bc1-cbfd-4d04-bdd0-f22d24da79e3] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [d67b3bc1-cbfd-4d04-bdd0-f22d24da79e3] Running
addons_test.go:627: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.003422814s
addons_test.go:632: (dbg) Run:  kubectl --context addons-214113 delete pod task-pv-pod-restore
addons_test.go:636: (dbg) Run:  kubectl --context addons-214113 delete pvc hpvc-restore
addons_test.go:640: (dbg) Run:  kubectl --context addons-214113 delete volumesnapshot new-snapshot-demo
addons_test.go:644: (dbg) Run:  out/minikube-linux-amd64 -p addons-214113 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:644: (dbg) Done: out/minikube-linux-amd64 -p addons-214113 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.872000778s)
addons_test.go:648: (dbg) Run:  out/minikube-linux-amd64 -p addons-214113 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:648: (dbg) Done: out/minikube-linux-amd64 -p addons-214113 addons disable volumesnapshots --alsologtostderr -v=1: (1.191505327s)
--- PASS: TestAddons/parallel/CSI (60.83s)
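
Condensed, this test walks the full CSI provision -> attach -> snapshot -> restore cycle. The same sequence as plain kubectl, with `--context addons-214113` omitted for brevity (the manifests are the test's own testdata files):

    kubectl create -f testdata/csi-hostpath-driver/pvc.yaml            # provision claim "hpvc"
    kubectl create -f testdata/csi-hostpath-driver/pv-pod.yaml         # pod "task-pv-pod" mounts it
    kubectl create -f testdata/csi-hostpath-driver/snapshot.yaml       # VolumeSnapshot "new-snapshot-demo"
    kubectl get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse}
    kubectl delete pod task-pv-pod && kubectl delete pvc hpvc
    kubectl create -f testdata/csi-hostpath-driver/pvc-restore.yaml    # new claim "hpvc-restore" from the snapshot
    kubectl create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml # pod reads back the restored data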

                                                
                                    
TestAddons/parallel/Headlamp (11.95s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:830: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-214113 --alsologtostderr -v=1
addons_test.go:835: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-7b5c95b59d-c2gvx" [2b5d9172-c4ba-47d5-9600-d90afe5ae492] Pending
helpers_test.go:344: "headlamp-7b5c95b59d-c2gvx" [2b5d9172-c4ba-47d5-9600-d90afe5ae492] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-7b5c95b59d-c2gvx" [2b5d9172-c4ba-47d5-9600-d90afe5ae492] Running
addons_test.go:835: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 11.004117854s
addons_test.go:839: (dbg) Run:  out/minikube-linux-amd64 -p addons-214113 addons disable headlamp --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Headlamp (11.95s)

                                                
                                    
TestAddons/parallel/CloudSpanner (5.42s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:867: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-769b77f747-lmz5w" [e08c34c6-8521-4bef-b044-e5d97e779a77] Running
addons_test.go:867: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.003303095s
addons_test.go:870: (dbg) Run:  out/minikube-linux-amd64 addons disable cloud-spanner -p addons-214113
--- PASS: TestAddons/parallel/CloudSpanner (5.42s)

                                                
                                    
TestAddons/parallel/LocalPath (55.4s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:982: (dbg) Run:  kubectl --context addons-214113 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:988: (dbg) Run:  kubectl --context addons-214113 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:992: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-214113 get pvc test-pvc -o jsonpath={.status.phase} -n default (repeated 7 times while polling)
addons_test.go:995: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [eb9602af-129f-4c6c-98f4-dd2f9ed68ccc] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [eb9602af-129f-4c6c-98f4-dd2f9ed68ccc] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [eb9602af-129f-4c6c-98f4-dd2f9ed68ccc] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:995: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 5.002964922s
addons_test.go:1000: (dbg) Run:  kubectl --context addons-214113 get pvc test-pvc -o=json
addons_test.go:1009: (dbg) Run:  out/minikube-linux-amd64 -p addons-214113 ssh "cat /opt/local-path-provisioner/pvc-17170fae-194c-46e0-85da-9bafa109dae7_default_test-pvc/file1"
addons_test.go:1021: (dbg) Run:  kubectl --context addons-214113 delete pod test-local-path
addons_test.go:1025: (dbg) Run:  kubectl --context addons-214113 delete pvc test-pvc
addons_test.go:1029: (dbg) Run:  out/minikube-linux-amd64 -p addons-214113 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1029: (dbg) Done: out/minikube-linux-amd64 -p addons-214113 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.586168719s)
--- PASS: TestAddons/parallel/LocalPath (55.40s)
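
The ssh step works because local-path-provisioner backs each claim with a plain directory on the node, named <pv-name>_<namespace>_<pvc-name>. A sketch of the same check without hard-coding the PV UID (the jsonpath lookup is an illustrative addition, not part of the test):

    PV=$(kubectl --context addons-214113 get pvc test-pvc -o jsonpath={.spec.volumeName})
    out/minikube-linux-amd64 -p addons-214113 ssh "cat /opt/local-path-provisioner/${PV}_default_test-pvc/file1"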

                                                
                                    
TestAddons/parallel/NvidiaDevicePlugin (5.39s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1061: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-w467n" [c8c1c53c-2620-4519-9766-1b19808a63f0] Running
addons_test.go:1061: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.004736521s
addons_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 addons disable nvidia-device-plugin -p addons-214113
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.39s)

                                                
                                    
TestAddons/parallel/Yakd (10.58s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd
=== CONT  TestAddons/parallel/Yakd
addons_test.go:1072: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-67d98fc6b-xthjc" [ced6cf6b-7bc8-42a1-9da9-2308bc04fab8] Running
addons_test.go:1072: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.003833282s
addons_test.go:1076: (dbg) Run:  out/minikube-linux-amd64 -p addons-214113 addons disable yakd --alsologtostderr -v=1
addons_test.go:1076: (dbg) Done: out/minikube-linux-amd64 -p addons-214113 addons disable yakd --alsologtostderr -v=1: (5.575745987s)
--- PASS: TestAddons/parallel/Yakd (10.58s)

                                                
                                    
TestAddons/StoppedEnableDisable (13.54s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:174: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-214113
addons_test.go:174: (dbg) Done: out/minikube-linux-amd64 stop -p addons-214113: (13.281594783s)
addons_test.go:178: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-214113
addons_test.go:182: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-214113
addons_test.go:187: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-214113
--- PASS: TestAddons/StoppedEnableDisable (13.54s)
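
The point of this test: addon toggles only touch the profile's stored configuration, so they succeed even while the cluster is stopped, and take effect on the next start. The sequence, as run above:

    out/minikube-linux-amd64 stop -p addons-214113
    out/minikube-linux-amd64 addons enable dashboard -p addons-214113    # works against the stopped cluster
    out/minikube-linux-amd64 addons disable dashboard -p addons-214113
    out/minikube-linux-amd64 addons disable gvisor -p addons-214113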

                                                
                                    
TestCertOptions (64.85s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-866352 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2 
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-866352 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2 : (1m2.709504642s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-866352 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-866352 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-866352 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-866352" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-866352
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-866352: (1.677527786s)
--- PASS: TestCertOptions (64.85s)
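
What the openssl step is checking: every extra --apiserver-ips/--apiserver-names value must appear as a SAN in the generated apiserver certificate, and the kubeconfig must point at the custom port. A manual spot-check (the grep filters are an illustrative addition):

    out/minikube-linux-amd64 -p cert-options-866352 ssh \
      "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt" | grep -A1 'Subject Alternative Name'
    kubectl --context cert-options-866352 config view | grep server    # the server URL should end in :8555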

                                                
                                    
TestCertExpiration (314.38s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-522970 --memory=2048 --cert-expiration=3m --driver=kvm2 
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-522970 --memory=2048 --cert-expiration=3m --driver=kvm2 : (1m24.14855359s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-522970 --memory=2048 --cert-expiration=8760h --driver=kvm2 
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-522970 --memory=2048 --cert-expiration=8760h --driver=kvm2 : (49.078746339s)
helpers_test.go:175: Cleaning up "cert-expiration-522970" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-522970
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-522970: (1.151187303s)
--- PASS: TestCertExpiration (314.38s)
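
The roughly three minutes unaccounted for between the two starts (84s + 49s of the 314s total) is presumably spent waiting for the 3m certificates to lapse, so that the second start has to detect the expired certs and reissue them with the new TTL:

    out/minikube-linux-amd64 start -p cert-expiration-522970 --memory=2048 --cert-expiration=3m --driver=kvm2
    # ...wait out the 3m expiry window...
    out/minikube-linux-amd64 start -p cert-expiration-522970 --memory=2048 --cert-expiration=8760h --driver=kvm2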

                                                
                                    
TestDockerFlags (65.14s)

=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags
=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-linux-amd64 start -p docker-flags-271298 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=kvm2 
E0916 18:11:10.191209  382962 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-375661/.minikube/profiles/skaffold-779710/client.crt: no such file or directory" logger="UnhandledError"
docker_test.go:51: (dbg) Done: out/minikube-linux-amd64 start -p docker-flags-271298 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=kvm2 : (1m3.518027367s)
docker_test.go:56: (dbg) Run:  out/minikube-linux-amd64 -p docker-flags-271298 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:67: (dbg) Run:  out/minikube-linux-amd64 -p docker-flags-271298 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
helpers_test.go:175: Cleaning up "docker-flags-271298" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-flags-271298
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-flags-271298: (1.137182169s)
--- PASS: TestDockerFlags (65.14s)
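
The two systemctl probes are how the flags are asserted to have landed: --docker-env values should surface in the docker unit's Environment= property, and --docker-opt values on its ExecStart line. Roughly:

    out/minikube-linux-amd64 -p docker-flags-271298 ssh "sudo systemctl show docker --property=Environment --no-pager"
    # expect FOO=BAR and BAZ=BAT in the output
    out/minikube-linux-amd64 -p docker-flags-271298 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
    # expect the debug and icc=true options on the dockerd command line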

                                                
                                    
TestForceSystemdFlag (93.96s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-162554 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2 
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-162554 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2 : (1m32.658611759s)
docker_test.go:110: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-162554 ssh "docker info --format {{.CgroupDriver}}"
E0916 18:11:08.909297  382962 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-375661/.minikube/profiles/skaffold-779710/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:175: Cleaning up "force-systemd-flag-162554" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-162554
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-162554: (1.065435276s)
--- PASS: TestForceSystemdFlag (93.96s)

                                                
                                    
TestForceSystemdEnv (54.51s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-526445 --memory=2048 --alsologtostderr -v=5 --driver=kvm2 
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-526445 --memory=2048 --alsologtostderr -v=5 --driver=kvm2 : (53.003676152s)
docker_test.go:110: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-env-526445 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-env-526445" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-526445
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-526445: (1.163912268s)
--- PASS: TestForceSystemdEnv (54.51s)
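
Both force-systemd variants end on the same assertion: Docker inside the node must report systemd as its cgroup driver. TestForceSystemdFlag above uses the --force-systemd flag; this env variant presumably exports MINIKUBE_FORCE_SYSTEMD=true before starting (the variable shows up in the environment dumps later in this report):

    MINIKUBE_FORCE_SYSTEMD=true out/minikube-linux-amd64 start -p force-systemd-env-526445 --memory=2048 --driver=kvm2
    out/minikube-linux-amd64 -p force-systemd-env-526445 ssh "docker info --format {{.CgroupDriver}}"    # expect: systemd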

                                                
                                    
TestKVMDriverInstallOrUpdate (4.69s)

=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate
=== CONT  TestKVMDriverInstallOrUpdate
--- PASS: TestKVMDriverInstallOrUpdate (4.69s)

                                                
                                    
TestErrorSpam/setup (46.03s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-140001 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-140001 --driver=kvm2 
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-140001 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-140001 --driver=kvm2 : (46.026402532s)
--- PASS: TestErrorSpam/setup (46.03s)

                                                
                                    
TestErrorSpam/start (0.32s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-140001 --log_dir /tmp/nospam-140001 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-140001 --log_dir /tmp/nospam-140001 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-140001 --log_dir /tmp/nospam-140001 start --dry-run
--- PASS: TestErrorSpam/start (0.32s)

                                                
                                    
TestErrorSpam/status (0.71s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-140001 --log_dir /tmp/nospam-140001 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-140001 --log_dir /tmp/nospam-140001 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-140001 --log_dir /tmp/nospam-140001 status
--- PASS: TestErrorSpam/status (0.71s)

                                                
                                    
TestErrorSpam/pause (1.14s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-140001 --log_dir /tmp/nospam-140001 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-140001 --log_dir /tmp/nospam-140001 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-140001 --log_dir /tmp/nospam-140001 pause
--- PASS: TestErrorSpam/pause (1.14s)

                                                
                                    
TestErrorSpam/unpause (1.27s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-140001 --log_dir /tmp/nospam-140001 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-140001 --log_dir /tmp/nospam-140001 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-140001 --log_dir /tmp/nospam-140001 unpause
--- PASS: TestErrorSpam/unpause (1.27s)

                                                
                                    
TestErrorSpam/stop (14.92s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-140001 --log_dir /tmp/nospam-140001 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-140001 --log_dir /tmp/nospam-140001 stop: (12.427588834s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-140001 --log_dir /tmp/nospam-140001 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-140001 --log_dir /tmp/nospam-140001 stop: (1.149694825s)
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-140001 --log_dir /tmp/nospam-140001 stop
error_spam_test.go:182: (dbg) Done: out/minikube-linux-amd64 -p nospam-140001 --log_dir /tmp/nospam-140001 stop: (1.344640092s)
--- PASS: TestErrorSpam/stop (14.92s)

                                                
                                    
TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1855: local sync path: /home/jenkins/minikube-integration/19649-375661/.minikube/files/etc/test/nested/copy/382962/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
TestFunctional/serial/StartWithProxy (83.4s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2234: (dbg) Run:  out/minikube-linux-amd64 start -p functional-841551 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2 
functional_test.go:2234: (dbg) Done: out/minikube-linux-amd64 start -p functional-841551 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2 : (1m23.398273686s)
--- PASS: TestFunctional/serial/StartWithProxy (83.40s)

                                                
                                    
TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
TestFunctional/serial/SoftStart (36.38s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:659: (dbg) Run:  out/minikube-linux-amd64 start -p functional-841551 --alsologtostderr -v=8
functional_test.go:659: (dbg) Done: out/minikube-linux-amd64 start -p functional-841551 --alsologtostderr -v=8: (36.382966699s)
functional_test.go:663: soft start took 36.383693117s for "functional-841551" cluster.
--- PASS: TestFunctional/serial/SoftStart (36.38s)

                                                
                                    
TestFunctional/serial/KubeContext (0.04s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:681: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

                                                
                                    
TestFunctional/serial/KubectlGetPods (0.06s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:696: (dbg) Run:  kubectl --context functional-841551 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.06s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_remote (2.75s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-841551 cache add registry.k8s.io/pause:3.1
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-841551 cache add registry.k8s.io/pause:3.3
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-841551 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (2.75s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_local (1.33s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1077: (dbg) Run:  docker build -t minikube-local-cache-test:functional-841551 /tmp/TestFunctionalserialCacheCmdcacheadd_local3748977556/001
functional_test.go:1089: (dbg) Run:  out/minikube-linux-amd64 -p functional-841551 cache add minikube-local-cache-test:functional-841551
functional_test.go:1089: (dbg) Done: out/minikube-linux-amd64 -p functional-841551 cache add minikube-local-cache-test:functional-841551: (1.042327051s)
functional_test.go:1094: (dbg) Run:  out/minikube-linux-amd64 -p functional-841551 cache delete minikube-local-cache-test:functional-841551
functional_test.go:1083: (dbg) Run:  docker rmi minikube-local-cache-test:functional-841551
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.33s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1102: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/list (0.04s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1110: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.04s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.2s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1124: (dbg) Run:  out/minikube-linux-amd64 -p functional-841551 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.20s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/cache_reload (1.14s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1147: (dbg) Run:  out/minikube-linux-amd64 -p functional-841551 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Run:  out/minikube-linux-amd64 -p functional-841551 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-841551 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (212.399192ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:1158: (dbg) Run:  out/minikube-linux-amd64 -p functional-841551 cache reload
functional_test.go:1163: (dbg) Run:  out/minikube-linux-amd64 -p functional-841551 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.14s)
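
`cache reload` exists for exactly the failure staged above: an image still present in minikube's on-host cache but deleted from the node's container runtime. The cycle, as run:

    out/minikube-linux-amd64 -p functional-841551 ssh sudo docker rmi registry.k8s.io/pause:latest
    out/minikube-linux-amd64 -p functional-841551 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # exit 1: image gone
    out/minikube-linux-amd64 -p functional-841551 cache reload                                            # push cached images back in
    out/minikube-linux-amd64 -p functional-841551 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # succeeds again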

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete (0.09s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1172: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1172: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.09s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmd (0.1s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:716: (dbg) Run:  out/minikube-linux-amd64 -p functional-841551 kubectl -- --context functional-841551 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.10s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.1s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:741: (dbg) Run:  out/kubectl --context functional-841551 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.10s)

                                                
                                    
TestFunctional/serial/ExtraConfig (40.41s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:757: (dbg) Run:  out/minikube-linux-amd64 start -p functional-841551 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:757: (dbg) Done: out/minikube-linux-amd64 start -p functional-841551 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (40.413715676s)
functional_test.go:761: restart took 40.413886821s for "functional-841551" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (40.41s)

                                                
                                    
TestFunctional/serial/ComponentHealth (0.06s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:810: (dbg) Run:  kubectl --context functional-841551 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:825: etcd phase: Running
functional_test.go:835: etcd status: Ready
functional_test.go:825: kube-apiserver phase: Running
functional_test.go:835: kube-apiserver status: Ready
functional_test.go:825: kube-controller-manager phase: Running
functional_test.go:835: kube-controller-manager status: Ready
functional_test.go:825: kube-scheduler phase: Running
functional_test.go:835: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.06s)
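
The phase/status pairs above are read out of the control-plane pods' JSON. An equivalent one-liner (the jsonpath template is an illustrative stand-in for the test's Go-side parsing):

    kubectl --context functional-841551 get po -l tier=control-plane -n kube-system \
      -o jsonpath='{range .items[*]}{.metadata.labels.component}{": "}{.status.phase}{"\n"}{end}'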

                                                
                                    
TestFunctional/serial/LogsCmd (0.85s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1236: (dbg) Run:  out/minikube-linux-amd64 -p functional-841551 logs
--- PASS: TestFunctional/serial/LogsCmd (0.85s)

                                                
                                    
TestFunctional/serial/LogsFileCmd (0.91s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1250: (dbg) Run:  out/minikube-linux-amd64 -p functional-841551 logs --file /tmp/TestFunctionalserialLogsFileCmd685971244/001/logs.txt
--- PASS: TestFunctional/serial/LogsFileCmd (0.91s)

                                                
                                    
TestFunctional/serial/InvalidService (4.48s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2321: (dbg) Run:  kubectl --context functional-841551 apply -f testdata/invalidsvc.yaml
functional_test.go:2335: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-841551
functional_test.go:2335: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-841551: exit status 115 (265.590884ms)

-- stdout --
	|-----------|-------------|-------------|----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL             |
	|-----------|-------------|-------------|----------------------------|
	| default   | invalid-svc |          80 | http://192.168.39.22:30106 |
	|-----------|-------------|-------------|----------------------------|
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
functional_test.go:2327: (dbg) Run:  kubectl --context functional-841551 delete -f testdata/invalidsvc.yaml
functional_test.go:2327: (dbg) Done: kubectl --context functional-841551 delete -f testdata/invalidsvc.yaml: (1.009974426s)
--- PASS: TestFunctional/serial/InvalidService (4.48s)
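
Exit status 115 corresponds to the SVC_UNREACHABLE error shown in stderr: the Service object exists, but nothing is running behind it, so no usable URL can be returned. A quick way to confirm the cause:

    kubectl --context functional-841551 get endpoints invalid-svc    # no addresses listed -> no running backing pod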

                                                
                                    
TestFunctional/parallel/ConfigCmd (0.29s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-841551 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-841551 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-841551 config get cpus: exit status 14 (46.667863ms)

** stderr ** 
	Error: specified key could not be found in config
** /stderr **
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-841551 config set cpus 2
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-841551 config get cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-841551 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-841551 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-841551 config get cpus: exit status 14 (45.431446ms)

** stderr ** 
	Error: specified key could not be found in config
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.29s)
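
The two Non-zero exits above are the expected behavior: `config get` on an unset key fails with exit status 14 rather than printing an empty value. The round trip:

    out/minikube-linux-amd64 -p functional-841551 config get cpus     # unset -> exit status 14
    out/minikube-linux-amd64 -p functional-841551 config set cpus 2
    out/minikube-linux-amd64 -p functional-841551 config get cpus     # prints 2
    out/minikube-linux-amd64 -p functional-841551 config unset cpus   # next get fails with 14 again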

                                                
                                    
TestFunctional/parallel/DashboardCmd (14.51s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:905: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-841551 --alsologtostderr -v=1]
functional_test.go:910: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-841551 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 393268: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (14.51s)

                                                
                                    
TestFunctional/parallel/DryRun (0.24s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:974: (dbg) Run:  out/minikube-linux-amd64 start -p functional-841551 --dry-run --memory 250MB --alsologtostderr --driver=kvm2 
functional_test.go:974: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-841551 --dry-run --memory 250MB --alsologtostderr --driver=kvm2 : exit status 23 (122.283014ms)

-- stdout --
	* [functional-841551] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19649
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19649-375661/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19649-375661/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
-- /stdout --
** stderr ** 
	I0916 17:28:14.122167  393176 out.go:345] Setting OutFile to fd 1 ...
	I0916 17:28:14.122258  393176 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 17:28:14.122266  393176 out.go:358] Setting ErrFile to fd 2...
	I0916 17:28:14.122270  393176 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 17:28:14.122478  393176 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19649-375661/.minikube/bin
	I0916 17:28:14.123023  393176 out.go:352] Setting JSON to false
	I0916 17:28:14.124091  393176 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":4238,"bootTime":1726503456,"procs":209,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0916 17:28:14.124186  393176 start.go:139] virtualization: kvm guest
	I0916 17:28:14.125650  393176 out.go:177] * [functional-841551] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0916 17:28:14.126879  393176 out.go:177]   - MINIKUBE_LOCATION=19649
	I0916 17:28:14.126901  393176 notify.go:220] Checking for updates...
	I0916 17:28:14.128451  393176 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0916 17:28:14.129328  393176 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19649-375661/kubeconfig
	I0916 17:28:14.130147  393176 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19649-375661/.minikube
	I0916 17:28:14.130991  393176 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0916 17:28:14.131890  393176 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0916 17:28:14.133222  393176 config.go:182] Loaded profile config "functional-841551": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0916 17:28:14.133611  393176 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0916 17:28:14.133655  393176 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 17:28:14.148894  393176 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46247
	I0916 17:28:14.149370  393176 main.go:141] libmachine: () Calling .GetVersion
	I0916 17:28:14.150046  393176 main.go:141] libmachine: Using API Version  1
	I0916 17:28:14.150074  393176 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 17:28:14.150369  393176 main.go:141] libmachine: () Calling .GetMachineName
	I0916 17:28:14.150543  393176 main.go:141] libmachine: (functional-841551) Calling .DriverName
	I0916 17:28:14.150741  393176 driver.go:394] Setting default libvirt URI to qemu:///system
	I0916 17:28:14.151026  393176 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0916 17:28:14.151086  393176 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 17:28:14.165876  393176 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43627
	I0916 17:28:14.166352  393176 main.go:141] libmachine: () Calling .GetVersion
	I0916 17:28:14.166839  393176 main.go:141] libmachine: Using API Version  1
	I0916 17:28:14.166854  393176 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 17:28:14.167172  393176 main.go:141] libmachine: () Calling .GetMachineName
	I0916 17:28:14.167351  393176 main.go:141] libmachine: (functional-841551) Calling .DriverName
	I0916 17:28:14.197634  393176 out.go:177] * Using the kvm2 driver based on existing profile
	I0916 17:28:14.198453  393176 start.go:297] selected driver: kvm2
	I0916 17:28:14.198464  393176 start.go:901] validating driver "kvm2" against &{Name:functional-841551 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726481311-19649@sha256:b5dfdcf7ad8b49233db09f1c58aaf52f6522fde64cf16c939b3fc45365d24cdc Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-841551 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.22 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 17:28:14.198552  393176 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0916 17:28:14.200164  393176 out.go:201] 
	W0916 17:28:14.201001  393176 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0916 17:28:14.201972  393176 out.go:201] 

** /stderr **
functional_test.go:991: (dbg) Run:  out/minikube-linux-amd64 start -p functional-841551 --dry-run --alsologtostderr -v=1 --driver=kvm2 
--- PASS: TestFunctional/parallel/DryRun (0.24s)
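The dry run above exercises minikube's requested-memory guard: with --memory 250MB the start aborts with RSRC_INSUFFICIENT_REQ_MEMORY (exit status 23) because the request is below the 1800MB usable minimum, while the flag-free dry run passes. A minimal Go sketch of that kind of preflight check; validateMemory and minUsableMB are illustrative names, not minikube's actual identifiers:

package main

import (
	"errors"
	"fmt"
	"os"
)

// minUsableMB mirrors the "usable minimum of 1800MB" from the log;
// the name is illustrative, not minikube's actual constant.
const minUsableMB = 1800

var errInsufficientMemory = errors.New("RSRC_INSUFFICIENT_REQ_MEMORY")

// validateMemory rejects allocations below the usable minimum, the
// way the dry run above rejects --memory 250MB.
func validateMemory(requestedMB int) error {
	if requestedMB < minUsableMB {
		return fmt.Errorf("%w: requested memory allocation %dMiB is less than the usable minimum of %dMB",
			errInsufficientMemory, requestedMB, minUsableMB)
	}
	return nil
}

func main() {
	if err := validateMemory(250); err != nil {
		fmt.Fprintln(os.Stderr, "X Exiting due to", err)
		os.Exit(23) // the exit status observed above
	}
}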

TestFunctional/parallel/InternationalLanguage (0.13s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1020: (dbg) Run:  out/minikube-linux-amd64 start -p functional-841551 --dry-run --memory 250MB --alsologtostderr --driver=kvm2 
functional_test.go:1020: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-841551 --dry-run --memory 250MB --alsologtostderr --driver=kvm2 : exit status 23 (129.610488ms)

-- stdout --
	* [functional-841551] minikube v1.34.0 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19649
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19649-375661/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19649-375661/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I0916 17:28:13.993992  393148 out.go:345] Setting OutFile to fd 1 ...
	I0916 17:28:13.994261  393148 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 17:28:13.994272  393148 out.go:358] Setting ErrFile to fd 2...
	I0916 17:28:13.994276  393148 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 17:28:13.994516  393148 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19649-375661/.minikube/bin
	I0916 17:28:13.995025  393148 out.go:352] Setting JSON to false
	I0916 17:28:13.995943  393148 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":4238,"bootTime":1726503456,"procs":207,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0916 17:28:13.996049  393148 start.go:139] virtualization: kvm guest
	I0916 17:28:13.997852  393148 out.go:177] * [functional-841551] minikube v1.34.0 sur Ubuntu 20.04 (kvm/amd64)
	I0916 17:28:13.999005  393148 out.go:177]   - MINIKUBE_LOCATION=19649
	I0916 17:28:13.999014  393148 notify.go:220] Checking for updates...
	I0916 17:28:14.001322  393148 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0916 17:28:14.002339  393148 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19649-375661/kubeconfig
	I0916 17:28:14.003271  393148 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19649-375661/.minikube
	I0916 17:28:14.004161  393148 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0916 17:28:14.005014  393148 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0916 17:28:14.006220  393148 config.go:182] Loaded profile config "functional-841551": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0916 17:28:14.006609  393148 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0916 17:28:14.006661  393148 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 17:28:14.021583  393148 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45041
	I0916 17:28:14.022099  393148 main.go:141] libmachine: () Calling .GetVersion
	I0916 17:28:14.022662  393148 main.go:141] libmachine: Using API Version  1
	I0916 17:28:14.022722  393148 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 17:28:14.023100  393148 main.go:141] libmachine: () Calling .GetMachineName
	I0916 17:28:14.023311  393148 main.go:141] libmachine: (functional-841551) Calling .DriverName
	I0916 17:28:14.023607  393148 driver.go:394] Setting default libvirt URI to qemu:///system
	I0916 17:28:14.023879  393148 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0916 17:28:14.023918  393148 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 17:28:14.038346  393148 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45419
	I0916 17:28:14.038729  393148 main.go:141] libmachine: () Calling .GetVersion
	I0916 17:28:14.039235  393148 main.go:141] libmachine: Using API Version  1
	I0916 17:28:14.039256  393148 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 17:28:14.039574  393148 main.go:141] libmachine: () Calling .GetMachineName
	I0916 17:28:14.039844  393148 main.go:141] libmachine: (functional-841551) Calling .DriverName
	I0916 17:28:14.072155  393148 out.go:177] * Utilisation du pilote kvm2 basé sur le profil existant
	I0916 17:28:14.073131  393148 start.go:297] selected driver: kvm2
	I0916 17:28:14.073151  393148 start.go:901] validating driver "kvm2" against &{Name:functional-841551 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726481311-19649@sha256:b5dfdcf7ad8b49233db09f1c58aaf52f6522fde64cf16c939b3fc45365d24cdc Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.31.1 ClusterName:functional-841551 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.22 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s
Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 17:28:14.073283  393148 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0916 17:28:14.075359  393148 out.go:201] 
	W0916 17:28:14.076383  393148 out.go:270] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0916 17:28:14.077462  393148 out.go:201] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.13s)
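InternationalLanguage repeats the same failing dry run under a French locale and asserts the translated output ("Utilisation du pilote kvm2 basé sur le profil existant", "Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY ..." are the English messages above rendered in French). A toy sketch of locale-keyed message selection; minikube's real localization is loaded from translation files, so the map here is only an assumption for illustration:

package main

import (
	"fmt"
	"os"
	"strings"
)

// messages is a toy translation table standing in for minikube's
// translation files; treat it as an assumption.
var messages = map[string]map[string]string{
	"en": {"existing-profile": "Using the kvm2 driver based on existing profile"},
	"fr": {"existing-profile": "Utilisation du pilote kvm2 basé sur le profil existant"},
}

// pickLocale reduces LANG (e.g. "fr_FR.UTF-8") to a language code and
// falls back to English when no translation exists.
func pickLocale() string {
	lang := os.Getenv("LANG")
	if i := strings.IndexAny(lang, "_."); i > 0 {
		lang = lang[:i]
	}
	if _, ok := messages[lang]; ok {
		return lang
	}
	return "en"
}

func main() {
	fmt.Println("*", messages[pickLocale()]["existing-profile"])
}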

TestFunctional/parallel/StatusCmd (0.76s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:854: (dbg) Run:  out/minikube-linux-amd64 -p functional-841551 status
functional_test.go:860: (dbg) Run:  out/minikube-linux-amd64 -p functional-841551 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:872: (dbg) Run:  out/minikube-linux-amd64 -p functional-841551 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.76s)
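The -f argument to status is a Go text/template evaluated against the status struct, which is why field references like {{.Host}} and {{.Kubelet}} appear in the command above (the "kublet" label is verbatim from the test). A small sketch with a stand-in Status type; only the field names used by the template are assumed:

package main

import (
	"os"
	"text/template"
)

// Status is a stand-in for minikube's status struct; only the field
// names referenced by the -f template above are assumed here.
type Status struct {
	Host, Kubelet, APIServer, Kubeconfig string
}

func main() {
	// The format string is copied verbatim from the test command,
	// including its "kublet" label.
	const format = "host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}\n"
	tmpl := template.Must(template.New("status").Parse(format))
	st := Status{Host: "Running", Kubelet: "Running", APIServer: "Running", Kubeconfig: "Configured"}
	// Renders: host:Running,kublet:Running,apiserver:Running,kubeconfig:Configured
	if err := tmpl.Execute(os.Stdout, st); err != nil {
		panic(err)
	}
}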

TestFunctional/parallel/ServiceCmdConnect (23.65s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1629: (dbg) Run:  kubectl --context functional-841551 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1635: (dbg) Run:  kubectl --context functional-841551 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-67bdd5bbb4-8rj2b" [c5a0a866-cd3e-470a-b096-cbb59408edbc] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-67bdd5bbb4-8rj2b" [c5a0a866-cd3e-470a-b096-cbb59408edbc] Running
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 23.201695393s
functional_test.go:1649: (dbg) Run:  out/minikube-linux-amd64 -p functional-841551 service hello-node-connect --url
functional_test.go:1655: found endpoint for hello-node-connect: http://192.168.39.22:30605
functional_test.go:1675: http://192.168.39.22:30605: success! body:

Hostname: hello-node-connect-67bdd5bbb4-8rj2b

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.39.22:8080/

Request Headers:
	accept-encoding=gzip
	host=192.168.39.22:30605
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-

--- PASS: TestFunctional/parallel/ServiceCmdConnect (23.65s)
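ServiceCmdConnect follows the usual NodePort round trip: create a deployment, expose it on a NodePort, resolve the URL with `minikube service --url`, then fetch it. A hypothetical polling helper in the same spirit, hard-coding the endpoint printed above; this is a sketch, not the test's own code:

package main

import (
	"fmt"
	"net/http"
	"time"
)

// waitForEndpoint polls a NodePort URL until it answers 200 OK or the
// deadline passes.
func waitForEndpoint(url string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if resp, err := http.Get(url); err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("%s not healthy within %s", url, timeout)
}

func main() {
	// The endpoint printed by `minikube service hello-node-connect --url` above.
	if err := waitForEndpoint("http://192.168.39.22:30605", 30*time.Second); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("success!")
}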

TestFunctional/parallel/AddonsCmd (0.12s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1690: (dbg) Run:  out/minikube-linux-amd64 -p functional-841551 addons list
functional_test.go:1702: (dbg) Run:  out/minikube-linux-amd64 -p functional-841551 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.12s)

TestFunctional/parallel/PersistentVolumeClaim (47.06s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [7f0fa9c0-4516-4675-9748-f088c8c9fd0c] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.003580648s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-841551 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-841551 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-841551 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-841551 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [621435ab-9b49-4241-bd78-8f9dfc69b1fb] Pending
helpers_test.go:344: "sp-pod" [621435ab-9b49-4241-bd78-8f9dfc69b1fb] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [621435ab-9b49-4241-bd78-8f9dfc69b1fb] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 27.003787879s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-841551 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-841551 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-841551 delete -f testdata/storage-provisioner/pod.yaml: (1.24058797s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-841551 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [92e2be6b-058f-45bb-a273-a33c46a87e61] Pending
helpers_test.go:344: "sp-pod" [92e2be6b-058f-45bb-a273-a33c46a87e61] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [92e2be6b-058f-45bb-a273-a33c46a87e61] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 13.003352877s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-841551 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (47.06s)
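The PVC test proves persistence by writing a file through the claim, deleting and recreating the pod, and checking that the file is still there; surviving a pod replacement shows the data lives in the volume, not the container filesystem. A rough sketch of that check driven through kubectl; the helper below is illustrative, not the test's actual plumbing:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// kubectl runs kubectl against the functional-841551 context, mirroring
// the commands in the test above.
func kubectl(args ...string) (string, error) {
	full := append([]string{"--context", "functional-841551"}, args...)
	out, err := exec.Command("kubectl", full...).CombinedOutput()
	return string(out), err
}

func main() {
	// Write a marker file through the mounted claim ...
	if _, err := kubectl("exec", "sp-pod", "--", "touch", "/tmp/mount/foo"); err != nil {
		panic(err)
	}
	// ... and, once the pod has been deleted and recreated from
	// pod.yaml, confirm the file survived the pod replacement.
	out, err := kubectl("exec", "sp-pod", "--", "ls", "/tmp/mount")
	if err != nil {
		panic(err)
	}
	fmt.Println("persisted:", strings.Contains(out, "foo"))
}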

TestFunctional/parallel/SSHCmd (0.41s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1725: (dbg) Run:  out/minikube-linux-amd64 -p functional-841551 ssh "echo hello"
functional_test.go:1742: (dbg) Run:  out/minikube-linux-amd64 -p functional-841551 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.41s)

TestFunctional/parallel/CpCmd (1.24s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-841551 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-841551 ssh -n functional-841551 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-841551 cp functional-841551:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd654363307/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-841551 ssh -n functional-841551 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-841551 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-841551 ssh -n functional-841551 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.24s)

TestFunctional/parallel/MySQL (27.47s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1793: (dbg) Run:  kubectl --context functional-841551 replace --force -f testdata/mysql.yaml
functional_test.go:1799: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-6cdb49bbb-gt4tj" [3861817c-211e-4863-83ef-d64ae6809a25] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-6cdb49bbb-gt4tj" [3861817c-211e-4863-83ef-d64ae6809a25] Running
functional_test.go:1799: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 22.004911086s
functional_test.go:1807: (dbg) Run:  kubectl --context functional-841551 exec mysql-6cdb49bbb-gt4tj -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context functional-841551 exec mysql-6cdb49bbb-gt4tj -- mysql -ppassword -e "show databases;": exit status 1 (282.242986ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **
functional_test.go:1807: (dbg) Run:  kubectl --context functional-841551 exec mysql-6cdb49bbb-gt4tj -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context functional-841551 exec mysql-6cdb49bbb-gt4tj -- mysql -ppassword -e "show databases;": exit status 1 (164.958655ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
functional_test.go:1807: (dbg) Run:  kubectl --context functional-841551 exec mysql-6cdb49bbb-gt4tj -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context functional-841551 exec mysql-6cdb49bbb-gt4tj -- mysql -ppassword -e "show databases;": exit status 1 (234.795607ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
functional_test.go:1807: (dbg) Run:  kubectl --context functional-841551 exec mysql-6cdb49bbb-gt4tj -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (27.47s)
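The three non-zero exits above are the expected warm-up sequence for a fresh MySQL pod: first an authentication error while the entrypoint re-initializes users (ERROR 1045), then socket-connection failures while mysqld restarts (ERROR 2002), and finally success. The test simply retries the same query; a sketch of such a retry loop, with illustrative attempt count and backoff:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// queryMySQL retries the same command the test runs until mysqld inside
// the pod is ready.
func queryMySQL(attempts int) error {
	var lastErr error
	for i := 0; i < attempts; i++ {
		cmd := exec.Command("kubectl", "--context", "functional-841551",
			"exec", "mysql-6cdb49bbb-gt4tj", "--",
			"mysql", "-ppassword", "-e", "show databases;")
		out, err := cmd.CombinedOutput()
		if err == nil {
			fmt.Print(string(out))
			return nil
		}
		lastErr = fmt.Errorf("attempt %d: %w", i+1, err)
		time.Sleep(time.Duration(i+1) * time.Second) // linear backoff
	}
	return lastErr
}

func main() {
	if err := queryMySQL(5); err != nil {
		fmt.Println(err)
	}
}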

TestFunctional/parallel/FileSync (0.2s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1929: Checking for existence of /etc/test/nested/copy/382962/hosts within VM
functional_test.go:1931: (dbg) Run:  out/minikube-linux-amd64 -p functional-841551 ssh "sudo cat /etc/test/nested/copy/382962/hosts"
functional_test.go:1936: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.20s)

TestFunctional/parallel/CertSync (1.24s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1972: Checking for existence of /etc/ssl/certs/382962.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-841551 ssh "sudo cat /etc/ssl/certs/382962.pem"
functional_test.go:1972: Checking for existence of /usr/share/ca-certificates/382962.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-841551 ssh "sudo cat /usr/share/ca-certificates/382962.pem"
functional_test.go:1972: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-841551 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/3829622.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-841551 ssh "sudo cat /etc/ssl/certs/3829622.pem"
functional_test.go:1999: Checking for existence of /usr/share/ca-certificates/3829622.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-841551 ssh "sudo cat /usr/share/ca-certificates/3829622.pem"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-841551 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.24s)

TestFunctional/parallel/NodeLabels (0.06s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:219: (dbg) Run:  kubectl --context functional-841551 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.06s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.21s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2027: (dbg) Run:  out/minikube-linux-amd64 -p functional-841551 ssh "sudo systemctl is-active crio"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-841551 ssh "sudo systemctl is-active crio": exit status 1 (214.020337ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.21s)
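This test passes on a non-zero exit: `systemctl is-active crio` prints "inactive" and exits with status 3 (surfaced above as `ssh: Process exited with status 3`), which is exactly what a docker-runtime cluster should report for crio. A sketch of distinguishing that expected failure from a real error:

package main

import (
	"errors"
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// `systemctl is-active` exits 0 only when the unit is active; for an
	// inactive unit it prints "inactive" and exits 3, the expected
	// result here since this cluster's runtime is docker.
	out, err := exec.Command("systemctl", "is-active", "crio").CombinedOutput()
	state := strings.TrimSpace(string(out))
	var exitErr *exec.ExitError
	switch {
	case err == nil:
		fmt.Println("unexpected: crio is active")
	case errors.As(err, &exitErr):
		fmt.Printf("crio is %s (exit %d): non-active runtime is disabled\n", state, exitErr.ExitCode())
	default:
		fmt.Println("could not run systemctl:", err)
	}
}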

TestFunctional/parallel/License (0.59s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2288: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.59s)

TestFunctional/parallel/Version/short (0.04s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2256: (dbg) Run:  out/minikube-linux-amd64 -p functional-841551 version --short
--- PASS: TestFunctional/parallel/Version/short (0.04s)

TestFunctional/parallel/Version/components (0.58s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2270: (dbg) Run:  out/minikube-linux-amd64 -p functional-841551 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.58s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.21s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-841551 image ls --format short --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-841551 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.31.1
registry.k8s.io/kube-proxy:v1.31.1
registry.k8s.io/kube-controller-manager:v1.31.1
registry.k8s.io/kube-apiserver:v1.31.1
registry.k8s.io/etcd:3.5.15-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.11.3
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/mysql:5.7
docker.io/library/minikube-local-cache-test:functional-841551
docker.io/kicbase/echo-server:functional-841551
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-841551 image ls --format short --alsologtostderr:
I0916 17:28:23.099944  394040 out.go:345] Setting OutFile to fd 1 ...
I0916 17:28:23.100063  394040 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0916 17:28:23.100075  394040 out.go:358] Setting ErrFile to fd 2...
I0916 17:28:23.100082  394040 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0916 17:28:23.100717  394040 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19649-375661/.minikube/bin
I0916 17:28:23.102074  394040 config.go:182] Loaded profile config "functional-841551": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0916 17:28:23.102281  394040 config.go:182] Loaded profile config "functional-841551": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0916 17:28:23.103206  394040 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0916 17:28:23.103261  394040 main.go:141] libmachine: Launching plugin server for driver kvm2
I0916 17:28:23.120096  394040 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36209
I0916 17:28:23.120557  394040 main.go:141] libmachine: () Calling .GetVersion
I0916 17:28:23.121252  394040 main.go:141] libmachine: Using API Version  1
I0916 17:28:23.121278  394040 main.go:141] libmachine: () Calling .SetConfigRaw
I0916 17:28:23.121699  394040 main.go:141] libmachine: () Calling .GetMachineName
I0916 17:28:23.121912  394040 main.go:141] libmachine: (functional-841551) Calling .GetState
I0916 17:28:23.123783  394040 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0916 17:28:23.123818  394040 main.go:141] libmachine: Launching plugin server for driver kvm2
I0916 17:28:23.140355  394040 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34425
I0916 17:28:23.140765  394040 main.go:141] libmachine: () Calling .GetVersion
I0916 17:28:23.141365  394040 main.go:141] libmachine: Using API Version  1
I0916 17:28:23.141389  394040 main.go:141] libmachine: () Calling .SetConfigRaw
I0916 17:28:23.141739  394040 main.go:141] libmachine: () Calling .GetMachineName
I0916 17:28:23.141992  394040 main.go:141] libmachine: (functional-841551) Calling .DriverName
I0916 17:28:23.142204  394040 ssh_runner.go:195] Run: systemctl --version
I0916 17:28:23.142236  394040 main.go:141] libmachine: (functional-841551) Calling .GetSSHHostname
I0916 17:28:23.146049  394040 main.go:141] libmachine: (functional-841551) DBG | domain functional-841551 has defined MAC address 52:54:00:3b:3f:8a in network mk-functional-841551
I0916 17:28:23.146389  394040 main.go:141] libmachine: (functional-841551) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:3f:8a", ip: ""} in network mk-functional-841551: {Iface:virbr1 ExpiryTime:2024-09-16 18:25:06 +0000 UTC Type:0 Mac:52:54:00:3b:3f:8a Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:functional-841551 Clientid:01:52:54:00:3b:3f:8a}
I0916 17:28:23.146413  394040 main.go:141] libmachine: (functional-841551) DBG | domain functional-841551 has defined IP address 192.168.39.22 and MAC address 52:54:00:3b:3f:8a in network mk-functional-841551
I0916 17:28:23.146546  394040 main.go:141] libmachine: (functional-841551) Calling .GetSSHPort
I0916 17:28:23.146706  394040 main.go:141] libmachine: (functional-841551) Calling .GetSSHKeyPath
I0916 17:28:23.146851  394040 main.go:141] libmachine: (functional-841551) Calling .GetSSHUsername
I0916 17:28:23.146982  394040 sshutil.go:53] new ssh client: &{IP:192.168.39.22 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19649-375661/.minikube/machines/functional-841551/id_rsa Username:docker}
I0916 17:28:23.235515  394040 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
I0916 17:28:23.255988  394040 main.go:141] libmachine: Making call to close driver server
I0916 17:28:23.256004  394040 main.go:141] libmachine: (functional-841551) Calling .Close
I0916 17:28:23.256282  394040 main.go:141] libmachine: Successfully made call to close driver server
I0916 17:28:23.256303  394040 main.go:141] libmachine: (functional-841551) DBG | Closing plugin on server side
I0916 17:28:23.256313  394040 main.go:141] libmachine: Making call to close connection to plugin binary
I0916 17:28:23.256322  394040 main.go:141] libmachine: Making call to close driver server
I0916 17:28:23.256330  394040 main.go:141] libmachine: (functional-841551) Calling .Close
I0916 17:28:23.256556  394040 main.go:141] libmachine: Successfully made call to close driver server
I0916 17:28:23.256568  394040 main.go:141] libmachine: Making call to close connection to plugin binary
I0916 17:28:23.256586  394040 main.go:141] libmachine: (functional-841551) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.21s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.3s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-841551 image ls --format table --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-841551 image ls --format table --alsologtostderr:
|---------------------------------------------|-------------------|---------------|--------|
|                    Image                    |        Tag        |   Image ID    |  Size  |
|---------------------------------------------|-------------------|---------------|--------|
| registry.k8s.io/kube-controller-manager     | v1.31.1           | 175ffd71cce3d | 88.4MB |
| registry.k8s.io/echoserver                  | 1.8               | 82e4c8a736a4f | 95.4MB |
| registry.k8s.io/kube-apiserver              | v1.31.1           | 6bab7719df100 | 94.2MB |
| registry.k8s.io/kube-scheduler              | v1.31.1           | 9aa1fad941575 | 67.4MB |
| docker.io/kicbase/echo-server               | functional-841551 | 9056ab77afb8e | 4.94MB |
| registry.k8s.io/pause                       | 3.3               | 0184c1613d929 | 683kB  |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc      | 56cc512116c8f | 4.4MB  |
| docker.io/library/minikube-local-cache-test | functional-841551 | 1e631920c8257 | 30B    |
| registry.k8s.io/kube-proxy                  | v1.31.1           | 60c005f310ff3 | 91.5MB |
| registry.k8s.io/coredns/coredns             | v1.11.3           | c69fa2e9cbf5f | 61.8MB |
| gcr.io/k8s-minikube/storage-provisioner     | v5                | 6e38f40d628db | 31.5MB |
| registry.k8s.io/pause                       | latest            | 350b164e7ae1d | 240kB  |
| docker.io/library/nginx                     | latest            | 39286ab8a5e14 | 188MB  |
| registry.k8s.io/etcd                        | 3.5.15-0          | 2e96e5913fc06 | 148MB  |
| registry.k8s.io/pause                       | 3.10              | 873ed75102791 | 736kB  |
| docker.io/library/mysql                     | 5.7               | 5107333e08a87 | 501MB  |
| docker.io/kubernetesui/dashboard            | <none>            | 07655ddf2eebe | 246MB  |
| registry.k8s.io/pause                       | 3.1               | da86e6ba6ca19 | 742kB  |
|---------------------------------------------|-------------------|---------------|--------|
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-841551 image ls --format table --alsologtostderr:
I0916 17:28:23.575439  394161 out.go:345] Setting OutFile to fd 1 ...
I0916 17:28:23.575559  394161 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0916 17:28:23.575568  394161 out.go:358] Setting ErrFile to fd 2...
I0916 17:28:23.575572  394161 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0916 17:28:23.575748  394161 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19649-375661/.minikube/bin
I0916 17:28:23.576271  394161 config.go:182] Loaded profile config "functional-841551": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0916 17:28:23.576364  394161 config.go:182] Loaded profile config "functional-841551": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0916 17:28:23.576686  394161 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0916 17:28:23.576724  394161 main.go:141] libmachine: Launching plugin server for driver kvm2
I0916 17:28:23.595089  394161 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42013
I0916 17:28:23.595625  394161 main.go:141] libmachine: () Calling .GetVersion
I0916 17:28:23.596302  394161 main.go:141] libmachine: Using API Version  1
I0916 17:28:23.596329  394161 main.go:141] libmachine: () Calling .SetConfigRaw
I0916 17:28:23.600077  394161 main.go:141] libmachine: () Calling .GetMachineName
I0916 17:28:23.600337  394161 main.go:141] libmachine: (functional-841551) Calling .GetState
I0916 17:28:23.602462  394161 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0916 17:28:23.602523  394161 main.go:141] libmachine: Launching plugin server for driver kvm2
I0916 17:28:23.621690  394161 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34513
I0916 17:28:23.622156  394161 main.go:141] libmachine: () Calling .GetVersion
I0916 17:28:23.622616  394161 main.go:141] libmachine: Using API Version  1
I0916 17:28:23.622629  394161 main.go:141] libmachine: () Calling .SetConfigRaw
I0916 17:28:23.622899  394161 main.go:141] libmachine: () Calling .GetMachineName
I0916 17:28:23.623041  394161 main.go:141] libmachine: (functional-841551) Calling .DriverName
I0916 17:28:23.623166  394161 ssh_runner.go:195] Run: systemctl --version
I0916 17:28:23.623184  394161 main.go:141] libmachine: (functional-841551) Calling .GetSSHHostname
I0916 17:28:23.626815  394161 main.go:141] libmachine: (functional-841551) DBG | domain functional-841551 has defined MAC address 52:54:00:3b:3f:8a in network mk-functional-841551
I0916 17:28:23.627426  394161 main.go:141] libmachine: (functional-841551) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:3f:8a", ip: ""} in network mk-functional-841551: {Iface:virbr1 ExpiryTime:2024-09-16 18:25:06 +0000 UTC Type:0 Mac:52:54:00:3b:3f:8a Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:functional-841551 Clientid:01:52:54:00:3b:3f:8a}
I0916 17:28:23.627453  394161 main.go:141] libmachine: (functional-841551) DBG | domain functional-841551 has defined IP address 192.168.39.22 and MAC address 52:54:00:3b:3f:8a in network mk-functional-841551
I0916 17:28:23.627719  394161 main.go:141] libmachine: (functional-841551) Calling .GetSSHPort
I0916 17:28:23.627898  394161 main.go:141] libmachine: (functional-841551) Calling .GetSSHKeyPath
I0916 17:28:23.628028  394161 main.go:141] libmachine: (functional-841551) Calling .GetSSHUsername
I0916 17:28:23.628186  394161 sshutil.go:53] new ssh client: &{IP:192.168.39.22 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19649-375661/.minikube/machines/functional-841551/id_rsa Username:docker}
I0916 17:28:23.715973  394161 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
I0916 17:28:23.741776  394161 main.go:141] libmachine: Making call to close driver server
I0916 17:28:23.741795  394161 main.go:141] libmachine: (functional-841551) Calling .Close
I0916 17:28:23.742106  394161 main.go:141] libmachine: Successfully made call to close driver server
I0916 17:28:23.742162  394161 main.go:141] libmachine: Making call to close connection to plugin binary
I0916 17:28:23.742173  394161 main.go:141] libmachine: Making call to close driver server
I0916 17:28:23.742184  394161 main.go:141] libmachine: (functional-841551) Calling .Close
I0916 17:28:23.742426  394161 main.go:141] libmachine: (functional-841551) DBG | Closing plugin on server side
I0916 17:28:23.742428  394161 main.go:141] libmachine: Successfully made call to close driver server
I0916 17:28:23.742461  394161 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.30s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.22s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-841551 image ls --format json --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-841551 image ls --format json --alsologtostderr:
[{"id":"175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.31.1"],"size":"88400000"},{"id":"873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.10"],"size":"736000"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4400000"},{"id":"2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.5.15-0"],"size":"148000000"},{"id":"5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933","repoDigests":[],"repoTags":["docker.io/library/mysql:5.7"],"size":"501000000"},{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":[],"repoTags":["docker.io/kicbase/echo-server:functional-841551"],"size":"4940000"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c1
04e7305ee8b53b50a9da","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"683000"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"742000"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":[],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"95400000"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"240000"},{"id":"9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.31.1"],"size":"67400000"},{"id":"60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.31.1"],"size":"91500000"},{"id":"39286ab8a5e14aeaf5fdd6e2fac76e0c8d31a0c07224f0ee5e6be502f12e93f3","repoDigests":[],"repoTags":["docker.io/library/nginx:latest"],"size":"1880000
00"},{"id":"c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.3"],"size":"61800000"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31500000"},{"id":"1e631920c8257e07608b4851bea11241bb96f6b0aa9e26f5509508b281e63532","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-841551"],"size":"30"},{"id":"6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.31.1"],"size":"94200000"}]
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-841551 image ls --format json --alsologtostderr:
I0916 17:28:23.352389  394108 out.go:345] Setting OutFile to fd 1 ...
I0916 17:28:23.352715  394108 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0916 17:28:23.352728  394108 out.go:358] Setting ErrFile to fd 2...
I0916 17:28:23.352736  394108 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0916 17:28:23.353004  394108 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19649-375661/.minikube/bin
I0916 17:28:23.353851  394108 config.go:182] Loaded profile config "functional-841551": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0916 17:28:23.354010  394108 config.go:182] Loaded profile config "functional-841551": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0916 17:28:23.354425  394108 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0916 17:28:23.354467  394108 main.go:141] libmachine: Launching plugin server for driver kvm2
I0916 17:28:23.369257  394108 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33275
I0916 17:28:23.369811  394108 main.go:141] libmachine: () Calling .GetVersion
I0916 17:28:23.370486  394108 main.go:141] libmachine: Using API Version  1
I0916 17:28:23.370503  394108 main.go:141] libmachine: () Calling .SetConfigRaw
I0916 17:28:23.370881  394108 main.go:141] libmachine: () Calling .GetMachineName
I0916 17:28:23.371049  394108 main.go:141] libmachine: (functional-841551) Calling .GetState
I0916 17:28:23.373235  394108 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0916 17:28:23.373286  394108 main.go:141] libmachine: Launching plugin server for driver kvm2
I0916 17:28:23.387753  394108 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37911
I0916 17:28:23.388173  394108 main.go:141] libmachine: () Calling .GetVersion
I0916 17:28:23.388737  394108 main.go:141] libmachine: Using API Version  1
I0916 17:28:23.388758  394108 main.go:141] libmachine: () Calling .SetConfigRaw
I0916 17:28:23.389069  394108 main.go:141] libmachine: () Calling .GetMachineName
I0916 17:28:23.389282  394108 main.go:141] libmachine: (functional-841551) Calling .DriverName
I0916 17:28:23.389491  394108 ssh_runner.go:195] Run: systemctl --version
I0916 17:28:23.389516  394108 main.go:141] libmachine: (functional-841551) Calling .GetSSHHostname
I0916 17:28:23.392254  394108 main.go:141] libmachine: (functional-841551) DBG | domain functional-841551 has defined MAC address 52:54:00:3b:3f:8a in network mk-functional-841551
I0916 17:28:23.392566  394108 main.go:141] libmachine: (functional-841551) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:3f:8a", ip: ""} in network mk-functional-841551: {Iface:virbr1 ExpiryTime:2024-09-16 18:25:06 +0000 UTC Type:0 Mac:52:54:00:3b:3f:8a Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:functional-841551 Clientid:01:52:54:00:3b:3f:8a}
I0916 17:28:23.392596  394108 main.go:141] libmachine: (functional-841551) DBG | domain functional-841551 has defined IP address 192.168.39.22 and MAC address 52:54:00:3b:3f:8a in network mk-functional-841551
I0916 17:28:23.392773  394108 main.go:141] libmachine: (functional-841551) Calling .GetSSHPort
I0916 17:28:23.392970  394108 main.go:141] libmachine: (functional-841551) Calling .GetSSHKeyPath
I0916 17:28:23.393082  394108 main.go:141] libmachine: (functional-841551) Calling .GetSSHUsername
I0916 17:28:23.393258  394108 sshutil.go:53] new ssh client: &{IP:192.168.39.22 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19649-375661/.minikube/machines/functional-841551/id_rsa Username:docker}
I0916 17:28:23.485353  394108 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
I0916 17:28:23.520721  394108 main.go:141] libmachine: Making call to close driver server
I0916 17:28:23.520764  394108 main.go:141] libmachine: (functional-841551) Calling .Close
I0916 17:28:23.521065  394108 main.go:141] libmachine: (functional-841551) DBG | Closing plugin on server side
I0916 17:28:23.521106  394108 main.go:141] libmachine: Successfully made call to close driver server
I0916 17:28:23.521113  394108 main.go:141] libmachine: Making call to close connection to plugin binary
I0916 17:28:23.521121  394108 main.go:141] libmachine: Making call to close driver server
I0916 17:28:23.521129  394108 main.go:141] libmachine: (functional-841551) Calling .Close
I0916 17:28:23.521370  394108 main.go:141] libmachine: (functional-841551) DBG | Closing plugin on server side
I0916 17:28:23.521406  394108 main.go:141] libmachine: Successfully made call to close driver server
I0916 17:28:23.521417  394108 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.22s)
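`image ls --format json` emits a single array of objects with id/repoDigests/repoTags/size fields, built from `docker images --no-trunc --format "{{json .}}"` inside the VM, as the stderr trace shows. A sketch that decodes a two-image excerpt of the output above; the struct fields are inferred from the log, not taken from minikube's source:

package main

import (
	"encoding/json"
	"fmt"
)

// image mirrors the fields visible in the JSON output above.
type image struct {
	ID       string   `json:"id"`
	RepoTags []string `json:"repoTags"`
	Size     string   `json:"size"`
}

func main() {
	// A two-image excerpt of the array printed by the test above.
	data := `[
	 {"id":"873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136","repoTags":["registry.k8s.io/pause:3.10"],"size":"736000"},
	 {"id":"5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933","repoTags":["docker.io/library/mysql:5.7"],"size":"501000000"}
	]`
	var imgs []image
	if err := json.Unmarshal([]byte(data), &imgs); err != nil {
		panic(err)
	}
	for _, im := range imgs {
		fmt.Printf("%-40s %s bytes\n", im.RepoTags[0], im.Size)
	}
}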

TestFunctional/parallel/ImageCommands/ImageListYaml (0.2s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-841551 image ls --format yaml --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-841551 image ls --format yaml --alsologtostderr:
- id: 175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.31.1
size: "88400000"
- id: 2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.5.15-0
size: "148000000"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4400000"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests: []
repoTags:
- registry.k8s.io/echoserver:1.8
size: "95400000"
- id: 39286ab8a5e14aeaf5fdd6e2fac76e0c8d31a0c07224f0ee5e6be502f12e93f3
repoDigests: []
repoTags:
- docker.io/library/nginx:latest
size: "188000000"
- id: c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.3
size: "61800000"
- id: 873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.10
size: "736000"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "742000"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "240000"
- id: 1e631920c8257e07608b4851bea11241bb96f6b0aa9e26f5509508b281e63532
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-841551
size: "30"
- id: 6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.31.1
size: "94200000"
- id: 60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.31.1
size: "91500000"
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests: []
repoTags:
- docker.io/kicbase/echo-server:functional-841551
size: "4940000"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "683000"
- id: 9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.31.1
size: "67400000"
- id: 5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933
repoDigests: []
repoTags:
- docker.io/library/mysql:5.7
size: "501000000"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31500000"

functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-841551 image ls --format yaml --alsologtostderr:
I0916 17:28:23.150041  394062 out.go:345] Setting OutFile to fd 1 ...
I0916 17:28:23.150130  394062 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0916 17:28:23.150137  394062 out.go:358] Setting ErrFile to fd 2...
I0916 17:28:23.150142  394062 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0916 17:28:23.150307  394062 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19649-375661/.minikube/bin
I0916 17:28:23.150846  394062 config.go:182] Loaded profile config "functional-841551": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0916 17:28:23.150955  394062 config.go:182] Loaded profile config "functional-841551": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0916 17:28:23.151295  394062 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0916 17:28:23.151331  394062 main.go:141] libmachine: Launching plugin server for driver kvm2
I0916 17:28:23.166274  394062 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35613
I0916 17:28:23.166708  394062 main.go:141] libmachine: () Calling .GetVersion
I0916 17:28:23.167237  394062 main.go:141] libmachine: Using API Version  1
I0916 17:28:23.167259  394062 main.go:141] libmachine: () Calling .SetConfigRaw
I0916 17:28:23.167623  394062 main.go:141] libmachine: () Calling .GetMachineName
I0916 17:28:23.167832  394062 main.go:141] libmachine: (functional-841551) Calling .GetState
I0916 17:28:23.169590  394062 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0916 17:28:23.169641  394062 main.go:141] libmachine: Launching plugin server for driver kvm2
I0916 17:28:23.184060  394062 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44761
I0916 17:28:23.184557  394062 main.go:141] libmachine: () Calling .GetVersion
I0916 17:28:23.185028  394062 main.go:141] libmachine: Using API Version  1
I0916 17:28:23.185068  394062 main.go:141] libmachine: () Calling .SetConfigRaw
I0916 17:28:23.185463  394062 main.go:141] libmachine: () Calling .GetMachineName
I0916 17:28:23.185664  394062 main.go:141] libmachine: (functional-841551) Calling .DriverName
I0916 17:28:23.185890  394062 ssh_runner.go:195] Run: systemctl --version
I0916 17:28:23.185916  394062 main.go:141] libmachine: (functional-841551) Calling .GetSSHHostname
I0916 17:28:23.188587  394062 main.go:141] libmachine: (functional-841551) DBG | domain functional-841551 has defined MAC address 52:54:00:3b:3f:8a in network mk-functional-841551
I0916 17:28:23.188961  394062 main.go:141] libmachine: (functional-841551) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:3f:8a", ip: ""} in network mk-functional-841551: {Iface:virbr1 ExpiryTime:2024-09-16 18:25:06 +0000 UTC Type:0 Mac:52:54:00:3b:3f:8a Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:functional-841551 Clientid:01:52:54:00:3b:3f:8a}
I0916 17:28:23.188991  394062 main.go:141] libmachine: (functional-841551) DBG | domain functional-841551 has defined IP address 192.168.39.22 and MAC address 52:54:00:3b:3f:8a in network mk-functional-841551
I0916 17:28:23.189147  394062 main.go:141] libmachine: (functional-841551) Calling .GetSSHPort
I0916 17:28:23.189300  394062 main.go:141] libmachine: (functional-841551) Calling .GetSSHKeyPath
I0916 17:28:23.189461  394062 main.go:141] libmachine: (functional-841551) Calling .GetSSHUsername
I0916 17:28:23.189629  394062 sshutil.go:53] new ssh client: &{IP:192.168.39.22 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19649-375661/.minikube/machines/functional-841551/id_rsa Username:docker}
I0916 17:28:23.267530  394062 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
I0916 17:28:23.296903  394062 main.go:141] libmachine: Making call to close driver server
I0916 17:28:23.296921  394062 main.go:141] libmachine: (functional-841551) Calling .Close
I0916 17:28:23.297256  394062 main.go:141] libmachine: Successfully made call to close driver server
I0916 17:28:23.297307  394062 main.go:141] libmachine: Making call to close connection to plugin binary
I0916 17:28:23.297321  394062 main.go:141] libmachine: Making call to close driver server
I0916 17:28:23.297334  394062 main.go:141] libmachine: (functional-841551) Calling .Close
I0916 17:28:23.297553  394062 main.go:141] libmachine: Successfully made call to close driver server
I0916 17:28:23.297566  394062 main.go:141] libmachine: Making call to close connection to plugin binary
I0916 17:28:23.297590  394062 main.go:141] libmachine: (functional-841551) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.20s)

TestFunctional/parallel/ImageCommands/ImageBuild (3.56s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:308: (dbg) Run:  out/minikube-linux-amd64 -p functional-841551 ssh pgrep buildkitd
functional_test.go:308: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-841551 ssh pgrep buildkitd: exit status 1 (208.611858ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:315: (dbg) Run:  out/minikube-linux-amd64 -p functional-841551 image build -t localhost/my-image:functional-841551 testdata/build --alsologtostderr
functional_test.go:315: (dbg) Done: out/minikube-linux-amd64 -p functional-841551 image build -t localhost/my-image:functional-841551 testdata/build --alsologtostderr: (3.130829036s)
functional_test.go:323: (dbg) Stderr: out/minikube-linux-amd64 -p functional-841551 image build -t localhost/my-image:functional-841551 testdata/build --alsologtostderr:
I0916 17:28:23.516807  394149 out.go:345] Setting OutFile to fd 1 ...
I0916 17:28:23.516945  394149 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0916 17:28:23.516958  394149 out.go:358] Setting ErrFile to fd 2...
I0916 17:28:23.516966  394149 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0916 17:28:23.517244  394149 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19649-375661/.minikube/bin
I0916 17:28:23.517837  394149 config.go:182] Loaded profile config "functional-841551": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0916 17:28:23.518564  394149 config.go:182] Loaded profile config "functional-841551": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0916 17:28:23.519062  394149 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0916 17:28:23.519112  394149 main.go:141] libmachine: Launching plugin server for driver kvm2
I0916 17:28:23.537071  394149 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45683
I0916 17:28:23.538552  394149 main.go:141] libmachine: () Calling .GetVersion
I0916 17:28:23.539202  394149 main.go:141] libmachine: Using API Version  1
I0916 17:28:23.539220  394149 main.go:141] libmachine: () Calling .SetConfigRaw
I0916 17:28:23.539662  394149 main.go:141] libmachine: () Calling .GetMachineName
I0916 17:28:23.539877  394149 main.go:141] libmachine: (functional-841551) Calling .GetState
I0916 17:28:23.541836  394149 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0916 17:28:23.541907  394149 main.go:141] libmachine: Launching plugin server for driver kvm2
I0916 17:28:23.559672  394149 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41333
I0916 17:28:23.560002  394149 main.go:141] libmachine: () Calling .GetVersion
I0916 17:28:23.560616  394149 main.go:141] libmachine: Using API Version  1
I0916 17:28:23.560631  394149 main.go:141] libmachine: () Calling .SetConfigRaw
I0916 17:28:23.560969  394149 main.go:141] libmachine: () Calling .GetMachineName
I0916 17:28:23.561180  394149 main.go:141] libmachine: (functional-841551) Calling .DriverName
I0916 17:28:23.561421  394149 ssh_runner.go:195] Run: systemctl --version
I0916 17:28:23.561452  394149 main.go:141] libmachine: (functional-841551) Calling .GetSSHHostname
I0916 17:28:23.564038  394149 main.go:141] libmachine: (functional-841551) DBG | domain functional-841551 has defined MAC address 52:54:00:3b:3f:8a in network mk-functional-841551
I0916 17:28:23.564493  394149 main.go:141] libmachine: (functional-841551) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:3f:8a", ip: ""} in network mk-functional-841551: {Iface:virbr1 ExpiryTime:2024-09-16 18:25:06 +0000 UTC Type:0 Mac:52:54:00:3b:3f:8a Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:functional-841551 Clientid:01:52:54:00:3b:3f:8a}
I0916 17:28:23.564522  394149 main.go:141] libmachine: (functional-841551) DBG | domain functional-841551 has defined IP address 192.168.39.22 and MAC address 52:54:00:3b:3f:8a in network mk-functional-841551
I0916 17:28:23.564641  394149 main.go:141] libmachine: (functional-841551) Calling .GetSSHPort
I0916 17:28:23.564768  394149 main.go:141] libmachine: (functional-841551) Calling .GetSSHKeyPath
I0916 17:28:23.564898  394149 main.go:141] libmachine: (functional-841551) Calling .GetSSHUsername
I0916 17:28:23.564990  394149 sshutil.go:53] new ssh client: &{IP:192.168.39.22 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19649-375661/.minikube/machines/functional-841551/id_rsa Username:docker}
I0916 17:28:23.670788  394149 build_images.go:161] Building image from path: /tmp/build.2615717714.tar
I0916 17:28:23.670874  394149 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0916 17:28:23.681878  394149 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.2615717714.tar
I0916 17:28:23.685507  394149 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.2615717714.tar: stat -c "%s %y" /var/lib/minikube/build/build.2615717714.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.2615717714.tar': No such file or directory
I0916 17:28:23.685531  394149 ssh_runner.go:362] scp /tmp/build.2615717714.tar --> /var/lib/minikube/build/build.2615717714.tar (3072 bytes)
I0916 17:28:23.707830  394149 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.2615717714
I0916 17:28:23.717374  394149 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.2615717714 -xf /var/lib/minikube/build/build.2615717714.tar
I0916 17:28:23.725704  394149 docker.go:360] Building image: /var/lib/minikube/build/build.2615717714
I0916 17:28:23.725758  394149 ssh_runner.go:195] Run: docker build -t localhost/my-image:functional-841551 /var/lib/minikube/build/build.2615717714
#0 building with "default" instance using docker driver

#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.0s

#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 1.5s

#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s

#4 [internal] load build context
#4 transferring context: 62B 0.0s done
#4 DONE 0.0s

#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.0s done
#5 sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a 1.46kB / 1.46kB done
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0B / 772.79kB 0.1s
#5 sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 770B / 770B done
#5 sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee 527B / 527B done
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 772.79kB / 772.79kB 0.6s
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 772.79kB / 772.79kB 0.6s done
#5 extracting sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0.1s done
#5 DONE 0.8s

#6 [2/3] RUN true
#6 DONE 0.2s

#7 [3/3] ADD content.txt /
#7 DONE 0.0s

#8 exporting to image
#8 exporting layers 0.0s done
#8 writing image sha256:4d1bea955342cd67e217a57c18744d94ab2253c716dac637b669878ad4e5b7b1 done
#8 naming to localhost/my-image:functional-841551 done
#8 DONE 0.1s
I0916 17:28:26.577020  394149 ssh_runner.go:235] Completed: docker build -t localhost/my-image:functional-841551 /var/lib/minikube/build/build.2615717714: (2.851232871s)
I0916 17:28:26.577137  394149 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.2615717714
I0916 17:28:26.587691  394149 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.2615717714.tar
I0916 17:28:26.595939  394149 build_images.go:217] Built localhost/my-image:functional-841551 from /tmp/build.2615717714.tar
I0916 17:28:26.595964  394149 build_images.go:133] succeeded building to: functional-841551
I0916 17:28:26.595989  394149 build_images.go:134] failed building to: 
I0916 17:28:26.596024  394149 main.go:141] libmachine: Making call to close driver server
I0916 17:28:26.596036  394149 main.go:141] libmachine: (functional-841551) Calling .Close
I0916 17:28:26.596337  394149 main.go:141] libmachine: (functional-841551) DBG | Closing plugin on server side
I0916 17:28:26.596346  394149 main.go:141] libmachine: Successfully made call to close driver server
I0916 17:28:26.596360  394149 main.go:141] libmachine: Making call to close connection to plugin binary
I0916 17:28:26.596368  394149 main.go:141] libmachine: Making call to close driver server
I0916 17:28:26.596375  394149 main.go:141] libmachine: (functional-841551) Calling .Close
I0916 17:28:26.596583  394149 main.go:141] libmachine: Successfully made call to close driver server
I0916 17:28:26.596596  394149 main.go:141] libmachine: Making call to close connection to plugin binary
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-841551 image ls
2024/09/16 17:28:28 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.56s)
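
Condensed for reference, the build exercised above is a two-command round trip; a minimal sketch using this run's profile and tag (the grep filter is added here for illustration):

    # Build testdata/build inside the cluster's Docker daemon (BuildKit steps #1-#8 above).
    out/minikube-linux-amd64 -p functional-841551 image build \
      -t localhost/my-image:functional-841551 testdata/build --alsologtostderr
    # Confirm the freshly built image is visible to the runtime.
    out/minikube-linux-amd64 -p functional-841551 image ls | grep my-image

The BuildKit log shows the Dockerfile under test is tiny: it pulls gcr.io/k8s-minikube/busybox, runs a no-op RUN true, and ADDs content.txt.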

TestFunctional/parallel/ImageCommands/Setup (1.75s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:342: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:342: (dbg) Done: docker pull kicbase/echo-server:1.0: (1.726798972s)
functional_test.go:347: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-841551
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.75s)

TestFunctional/parallel/DockerEnv/bash (0.76s)

=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:499: (dbg) Run:  /bin/bash -c "eval $(out/minikube-linux-amd64 -p functional-841551 docker-env) && out/minikube-linux-amd64 status -p functional-841551"
functional_test.go:522: (dbg) Run:  /bin/bash -c "eval $(out/minikube-linux-amd64 -p functional-841551 docker-env) && docker images"
--- PASS: TestFunctional/parallel/DockerEnv/bash (0.76s)
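
The DockerEnv check above reduces to this sketch, assuming a bash shell and this run's profile name:

    # Export DOCKER_HOST and friends so the host docker CLI talks to the VM's daemon.
    eval $(out/minikube-linux-amd64 -p functional-841551 docker-env)
    # This now lists the images stored inside functional-841551, not the host's.
    docker images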

TestFunctional/parallel/UpdateContextCmd/no_changes (0.09s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-841551 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.09s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.1s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-841551 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.10s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.1s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-841551 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.10s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.06s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:355: (dbg) Run:  out/minikube-linux-amd64 -p functional-841551 image load --daemon kicbase/echo-server:functional-841551 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-841551 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.06s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.74s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p functional-841551 image load --daemon kicbase/echo-server:functional-841551 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-841551 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.74s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.6s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:235: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:240: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-841551
functional_test.go:245: (dbg) Run:  out/minikube-linux-amd64 -p functional-841551 image load --daemon kicbase/echo-server:functional-841551 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-841551 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.60s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.35s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-841551 image save kicbase/echo-server:functional-841551 /home/jenkins/workspace/KVM_Linux_integration/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.35s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.4s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:392: (dbg) Run:  out/minikube-linux-amd64 -p functional-841551 image rm kicbase/echo-server:functional-841551 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-841551 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.40s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.14s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:409: (dbg) Run:  out/minikube-linux-amd64 -p functional-841551 image load /home/jenkins/workspace/KVM_Linux_integration/echo-server-save.tar --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-841551 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.14s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.46s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:419: (dbg) Run:  docker rmi kicbase/echo-server:functional-841551
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-841551 image save --daemon kicbase/echo-server:functional-841551 --alsologtostderr
functional_test.go:432: (dbg) Run:  docker image inspect kicbase/echo-server:functional-841551
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.46s)

TestFunctional/parallel/ServiceCmd/DeployApp (26.16s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1439: (dbg) Run:  kubectl --context functional-841551 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1445: (dbg) Run:  kubectl --context functional-841551 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-6b9f76b5c7-z5jhw" [692d28c6-2dcf-4371-8ace-f68f47e54e47] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-6b9f76b5c7-z5jhw" [692d28c6-2dcf-4371-8ace-f68f47e54e47] Running
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 26.002700444s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (26.16s)
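
A condensed sketch of the deploy-and-expose sequence driven above; the first two commands are taken from the log, while the kubectl wait is a hypothetical stand-in for the test's own pod polling:

    kubectl --context functional-841551 create deployment hello-node \
      --image=registry.k8s.io/echoserver:1.8
    kubectl --context functional-841551 expose deployment hello-node \
      --type=NodePort --port=8080
    # Hypothetical readiness check; the test instead polls pods matching app=hello-node.
    kubectl --context functional-841551 wait --for=condition=available \
      deployment/hello-node --timeout=10m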

TestFunctional/parallel/ProfileCmd/profile_not_create (0.33s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1270: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1275: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.33s)

TestFunctional/parallel/ProfileCmd/profile_list (0.31s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1310: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1315: Took "247.259548ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1324: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1329: Took "58.85194ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.31s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.31s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1361: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1366: Took "259.922312ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1374: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1379: Took "45.462157ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.31s)

TestFunctional/parallel/MountCmd/any-port (7.6s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-841551 /tmp/TestFunctionalparallelMountCmdany-port3654824679/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1726507691931640914" to /tmp/TestFunctionalparallelMountCmdany-port3654824679/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1726507691931640914" to /tmp/TestFunctionalparallelMountCmdany-port3654824679/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1726507691931640914" to /tmp/TestFunctionalparallelMountCmdany-port3654824679/001/test-1726507691931640914
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-841551 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-841551 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (233.612499ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-841551 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-841551 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Sep 16 17:28 created-by-test
-rw-r--r-- 1 docker docker 24 Sep 16 17:28 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Sep 16 17:28 test-1726507691931640914
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-841551 ssh cat /mount-9p/test-1726507691931640914
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-841551 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [63f9d16e-2f17-407c-8246-3e6fdea9c5ce] Pending
helpers_test.go:344: "busybox-mount" [63f9d16e-2f17-407c-8246-3e6fdea9c5ce] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [63f9d16e-2f17-407c-8246-3e6fdea9c5ce] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [63f9d16e-2f17-407c-8246-3e6fdea9c5ce] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 5.002866361s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-841551 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-841551 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-841551 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-841551 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-841551 /tmp/TestFunctionalparallelMountCmdany-port3654824679/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (7.60s)
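
The 9p round trip verified above reduces to the following sketch (host path from this run; backgrounding the mount by hand is an assumption, the test runs it as a managed daemon):

    # Serve a host directory into the guest over 9p.
    out/minikube-linux-amd64 mount -p functional-841551 \
      /tmp/TestFunctionalparallelMountCmdany-port3654824679/001:/mount-9p &
    # Confirm the 9p filesystem is mounted, then inspect its contents.
    out/minikube-linux-amd64 -p functional-841551 ssh "findmnt -T /mount-9p | grep 9p"
    out/minikube-linux-amd64 -p functional-841551 ssh -- ls -la /mount-9p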

TestFunctional/parallel/ServiceCmd/List (1.38s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1459: (dbg) Run:  out/minikube-linux-amd64 -p functional-841551 service list
functional_test.go:1459: (dbg) Done: out/minikube-linux-amd64 -p functional-841551 service list: (1.379270598s)
--- PASS: TestFunctional/parallel/ServiceCmd/List (1.38s)

TestFunctional/parallel/MountCmd/specific-port (1.84s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-841551 /tmp/TestFunctionalparallelMountCmdspecific-port4186417242/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-841551 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-841551 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (210.969169ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-841551 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-841551 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-841551 /tmp/TestFunctionalparallelMountCmdspecific-port4186417242/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-841551 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-841551 ssh "sudo umount -f /mount-9p": exit status 1 (195.959927ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-841551 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-841551 /tmp/TestFunctionalparallelMountCmdspecific-port4186417242/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.84s)

TestFunctional/parallel/ServiceCmd/JSONOutput (1.24s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1489: (dbg) Run:  out/minikube-linux-amd64 -p functional-841551 service list -o json
functional_test.go:1489: (dbg) Done: out/minikube-linux-amd64 -p functional-841551 service list -o json: (1.243203687s)
functional_test.go:1494: Took "1.243321303s" to run "out/minikube-linux-amd64 -p functional-841551 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (1.24s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.36s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1509: (dbg) Run:  out/minikube-linux-amd64 -p functional-841551 service --namespace=default --https --url hello-node
functional_test.go:1522: found endpoint: https://192.168.39.22:32673
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.36s)

TestFunctional/parallel/MountCmd/VerifyCleanup (1.63s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-841551 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3620454942/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-841551 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3620454942/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-841551 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3620454942/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-841551 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-841551 ssh "findmnt -T" /mount1: exit status 1 (297.589205ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-841551 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-841551 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-841551 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-841551 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-841551 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3620454942/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-841551 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3620454942/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-841551 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3620454942/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.63s)

TestFunctional/parallel/ServiceCmd/Format (0.37s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1540: (dbg) Run:  out/minikube-linux-amd64 -p functional-841551 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.37s)

TestFunctional/parallel/ServiceCmd/URL (0.29s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1559: (dbg) Run:  out/minikube-linux-amd64 -p functional-841551 service hello-node --url
functional_test.go:1565: found endpoint for hello-node: http://192.168.39.22:32673
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.29s)
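
Endpoint discovery in the two subtests above is the same service command with and without TLS; both forms as executed in this run:

    # Prints http://192.168.39.22:32673 for this run's NodePort service.
    out/minikube-linux-amd64 -p functional-841551 service hello-node --url
    out/minikube-linux-amd64 -p functional-841551 service --namespace=default --https --url hello-node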

TestFunctional/delete_echo-server_images (0.03s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-841551
--- PASS: TestFunctional/delete_echo-server_images (0.03s)

TestFunctional/delete_my-image_image (0.01s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:198: (dbg) Run:  docker rmi -f localhost/my-image:functional-841551
--- PASS: TestFunctional/delete_my-image_image (0.01s)

TestFunctional/delete_minikube_cached_images (0.01s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:206: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-841551
--- PASS: TestFunctional/delete_minikube_cached_images (0.01s)

TestGvisorAddon (211.09s)

=== RUN   TestGvisorAddon
=== PAUSE TestGvisorAddon

=== CONT  TestGvisorAddon
gvisor_addon_test.go:52: (dbg) Run:  out/minikube-linux-amd64 start -p gvisor-657329 --memory=2200 --container-runtime=containerd --docker-opt containerd=/var/run/containerd/containerd.sock --driver=kvm2 
gvisor_addon_test.go:52: (dbg) Done: out/minikube-linux-amd64 start -p gvisor-657329 --memory=2200 --container-runtime=containerd --docker-opt containerd=/var/run/containerd/containerd.sock --driver=kvm2 : (1m3.450124585s)
gvisor_addon_test.go:58: (dbg) Run:  out/minikube-linux-amd64 -p gvisor-657329 cache add gcr.io/k8s-minikube/gvisor-addon:2
gvisor_addon_test.go:58: (dbg) Done: out/minikube-linux-amd64 -p gvisor-657329 cache add gcr.io/k8s-minikube/gvisor-addon:2: (22.965372519s)
gvisor_addon_test.go:63: (dbg) Run:  out/minikube-linux-amd64 -p gvisor-657329 addons enable gvisor
gvisor_addon_test.go:63: (dbg) Done: out/minikube-linux-amd64 -p gvisor-657329 addons enable gvisor: (5.110975135s)
gvisor_addon_test.go:68: (dbg) TestGvisorAddon: waiting 4m0s for pods matching "kubernetes.io/minikube-addons=gvisor" in namespace "kube-system" ...
helpers_test.go:344: "gvisor" [e30cf785-854c-4df3-a515-4a2f285a83e3] Running
gvisor_addon_test.go:68: (dbg) TestGvisorAddon: kubernetes.io/minikube-addons=gvisor healthy within 6.004385735s
gvisor_addon_test.go:73: (dbg) Run:  kubectl --context gvisor-657329 replace --force -f testdata/nginx-gvisor.yaml
gvisor_addon_test.go:78: (dbg) TestGvisorAddon: waiting 4m0s for pods matching "run=nginx,runtime=gvisor" in namespace "default" ...
helpers_test.go:344: "nginx-gvisor" [66648fcd-3622-4655-a3c0-64d296c0dd60] Pending
helpers_test.go:344: "nginx-gvisor" [66648fcd-3622-4655-a3c0-64d296c0dd60] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-gvisor" [66648fcd-3622-4655-a3c0-64d296c0dd60] Running
E0916 18:11:07.622054  382962 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-375661/.minikube/profiles/skaffold-779710/client.crt: no such file or directory" logger="UnhandledError"
E0916 18:11:07.628425  382962 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-375661/.minikube/profiles/skaffold-779710/client.crt: no such file or directory" logger="UnhandledError"
E0916 18:11:07.639767  382962 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-375661/.minikube/profiles/skaffold-779710/client.crt: no such file or directory" logger="UnhandledError"
E0916 18:11:07.661124  382962 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-375661/.minikube/profiles/skaffold-779710/client.crt: no such file or directory" logger="UnhandledError"
E0916 18:11:07.702492  382962 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-375661/.minikube/profiles/skaffold-779710/client.crt: no such file or directory" logger="UnhandledError"
E0916 18:11:07.783863  382962 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-375661/.minikube/profiles/skaffold-779710/client.crt: no such file or directory" logger="UnhandledError"
E0916 18:11:07.945469  382962 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-375661/.minikube/profiles/skaffold-779710/client.crt: no such file or directory" logger="UnhandledError"
E0916 18:11:08.267164  382962 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-375661/.minikube/profiles/skaffold-779710/client.crt: no such file or directory" logger="UnhandledError"
gvisor_addon_test.go:78: (dbg) TestGvisorAddon: run=nginx,runtime=gvisor healthy within 36.004317403s
gvisor_addon_test.go:83: (dbg) Run:  out/minikube-linux-amd64 stop -p gvisor-657329
E0916 18:11:12.753322  382962 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-375661/.minikube/profiles/skaffold-779710/client.crt: no such file or directory" logger="UnhandledError"
E0916 18:11:17.874646  382962 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-375661/.minikube/profiles/skaffold-779710/client.crt: no such file or directory" logger="UnhandledError"
gvisor_addon_test.go:83: (dbg) Done: out/minikube-linux-amd64 stop -p gvisor-657329: (6.562500369s)
gvisor_addon_test.go:88: (dbg) Run:  out/minikube-linux-amd64 start -p gvisor-657329 --memory=2200 --container-runtime=containerd --docker-opt containerd=/var/run/containerd/containerd.sock --driver=kvm2 
gvisor_addon_test.go:88: (dbg) Done: out/minikube-linux-amd64 start -p gvisor-657329 --memory=2200 --container-runtime=containerd --docker-opt containerd=/var/run/containerd/containerd.sock --driver=kvm2 : (58.736981126s)
gvisor_addon_test.go:92: (dbg) TestGvisorAddon: waiting 4m0s for pods matching "kubernetes.io/minikube-addons=gvisor" in namespace "kube-system" ...
helpers_test.go:344: "gvisor" [e30cf785-854c-4df3-a515-4a2f285a83e3] Running / Ready:ContainersNotReady (containers with unready status: [gvisor]) / ContainersReady:ContainersNotReady (containers with unready status: [gvisor])
helpers_test.go:344: "gvisor" [e30cf785-854c-4df3-a515-4a2f285a83e3] Running
gvisor_addon_test.go:92: (dbg) TestGvisorAddon: kubernetes.io/minikube-addons=gvisor healthy within 6.003986001s
gvisor_addon_test.go:95: (dbg) TestGvisorAddon: waiting 4m0s for pods matching "run=nginx,runtime=gvisor" in namespace "default" ...
helpers_test.go:344: "nginx-gvisor" [66648fcd-3622-4655-a3c0-64d296c0dd60] Running / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
gvisor_addon_test.go:95: (dbg) TestGvisorAddon: run=nginx,runtime=gvisor healthy within 5.003478455s
helpers_test.go:175: Cleaning up "gvisor-657329" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p gvisor-657329
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p gvisor-657329: (1.094031965s)
--- PASS: TestGvisorAddon (211.09s)
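
Condensed from the run above, the gVisor enablement sequence (all three commands as executed in this log; containerd is the required runtime):

    out/minikube-linux-amd64 start -p gvisor-657329 --memory=2200 \
      --container-runtime=containerd \
      --docker-opt containerd=/var/run/containerd/containerd.sock --driver=kvm2
    # Pre-cache the addon image, then switch the addon on.
    out/minikube-linux-amd64 -p gvisor-657329 cache add gcr.io/k8s-minikube/gvisor-addon:2
    out/minikube-linux-amd64 -p gvisor-657329 addons enable gvisor

Workloads opting into the gvisor runtime (nginx-gvisor above) then run sandboxed under runsc.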

TestMultiControlPlane/serial/StartCluster (215.93s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p ha-037596 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2 
E0916 17:28:35.964154  382962 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-375661/.minikube/profiles/addons-214113/client.crt: no such file or directory" logger="UnhandledError"
E0916 17:28:35.971060  382962 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-375661/.minikube/profiles/addons-214113/client.crt: no such file or directory" logger="UnhandledError"
E0916 17:28:35.982390  382962 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-375661/.minikube/profiles/addons-214113/client.crt: no such file or directory" logger="UnhandledError"
E0916 17:28:36.003716  382962 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-375661/.minikube/profiles/addons-214113/client.crt: no such file or directory" logger="UnhandledError"
E0916 17:28:36.045090  382962 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-375661/.minikube/profiles/addons-214113/client.crt: no such file or directory" logger="UnhandledError"
E0916 17:28:36.126439  382962 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-375661/.minikube/profiles/addons-214113/client.crt: no such file or directory" logger="UnhandledError"
E0916 17:28:36.287917  382962 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-375661/.minikube/profiles/addons-214113/client.crt: no such file or directory" logger="UnhandledError"
E0916 17:28:36.609612  382962 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-375661/.minikube/profiles/addons-214113/client.crt: no such file or directory" logger="UnhandledError"
E0916 17:28:37.251492  382962 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-375661/.minikube/profiles/addons-214113/client.crt: no such file or directory" logger="UnhandledError"
E0916 17:28:38.533091  382962 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-375661/.minikube/profiles/addons-214113/client.crt: no such file or directory" logger="UnhandledError"
E0916 17:28:41.094465  382962 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-375661/.minikube/profiles/addons-214113/client.crt: no such file or directory" logger="UnhandledError"
E0916 17:28:46.216197  382962 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-375661/.minikube/profiles/addons-214113/client.crt: no such file or directory" logger="UnhandledError"
E0916 17:28:56.458189  382962 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-375661/.minikube/profiles/addons-214113/client.crt: no such file or directory" logger="UnhandledError"
E0916 17:29:16.939681  382962 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-375661/.minikube/profiles/addons-214113/client.crt: no such file or directory" logger="UnhandledError"
E0916 17:29:57.901831  382962 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-375661/.minikube/profiles/addons-214113/client.crt: no such file or directory" logger="UnhandledError"
E0916 17:31:19.823877  382962 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-375661/.minikube/profiles/addons-214113/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 start -p ha-037596 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2 : (3m35.290973152s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-037596 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (215.93s)

TestMultiControlPlane/serial/DeployApp (5.52s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-037596 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-037596 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 kubectl -p ha-037596 -- rollout status deployment/busybox: (3.443771868s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-037596 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-037596 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-037596 -- exec busybox-7dff88458-m5jtp -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-037596 -- exec busybox-7dff88458-qln2f -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-037596 -- exec busybox-7dff88458-rfd8f -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-037596 -- exec busybox-7dff88458-m5jtp -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-037596 -- exec busybox-7dff88458-qln2f -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-037596 -- exec busybox-7dff88458-rfd8f -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-037596 -- exec busybox-7dff88458-m5jtp -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-037596 -- exec busybox-7dff88458-qln2f -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-037596 -- exec busybox-7dff88458-rfd8f -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (5.52s)

TestMultiControlPlane/serial/PingHostFromPods (1.18s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-037596 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-037596 -- exec busybox-7dff88458-m5jtp -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-037596 -- exec busybox-7dff88458-m5jtp -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-037596 -- exec busybox-7dff88458-qln2f -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-037596 -- exec busybox-7dff88458-qln2f -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-037596 -- exec busybox-7dff88458-rfd8f -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-037596 -- exec busybox-7dff88458-rfd8f -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.18s)

                                                
                                    
TestMultiControlPlane/serial/AddWorkerNode (59.3s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-037596 -v=7 --alsologtostderr
E0916 17:32:45.907038  382962 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-375661/.minikube/profiles/functional-841551/client.crt: no such file or directory" logger="UnhandledError"
E0916 17:32:45.913453  382962 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-375661/.minikube/profiles/functional-841551/client.crt: no such file or directory" logger="UnhandledError"
E0916 17:32:45.924889  382962 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-375661/.minikube/profiles/functional-841551/client.crt: no such file or directory" logger="UnhandledError"
E0916 17:32:45.946381  382962 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-375661/.minikube/profiles/functional-841551/client.crt: no such file or directory" logger="UnhandledError"
E0916 17:32:45.987980  382962 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-375661/.minikube/profiles/functional-841551/client.crt: no such file or directory" logger="UnhandledError"
E0916 17:32:46.069464  382962 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-375661/.minikube/profiles/functional-841551/client.crt: no such file or directory" logger="UnhandledError"
E0916 17:32:46.231387  382962 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-375661/.minikube/profiles/functional-841551/client.crt: no such file or directory" logger="UnhandledError"
E0916 17:32:46.553624  382962 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-375661/.minikube/profiles/functional-841551/client.crt: no such file or directory" logger="UnhandledError"
E0916 17:32:47.195321  382962 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-375661/.minikube/profiles/functional-841551/client.crt: no such file or directory" logger="UnhandledError"
E0916 17:32:48.476802  382962 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-375661/.minikube/profiles/functional-841551/client.crt: no such file or directory" logger="UnhandledError"
E0916 17:32:51.039112  382962 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-375661/.minikube/profiles/functional-841551/client.crt: no such file or directory" logger="UnhandledError"
E0916 17:32:56.161270  382962 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-375661/.minikube/profiles/functional-841551/client.crt: no such file or directory" logger="UnhandledError"
E0916 17:33:06.402619  382962 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-375661/.minikube/profiles/functional-841551/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 node add -p ha-037596 -v=7 --alsologtostderr: (58.507705519s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-037596 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (59.30s)

                                                
                                    
TestMultiControlPlane/serial/NodeLabels (0.06s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-037596 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.06s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterClusterStart (0.51s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.51s)

                                                
                                    
TestMultiControlPlane/serial/CopyFile (12.08s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-linux-amd64 -p ha-037596 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-037596 cp testdata/cp-test.txt ha-037596:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-037596 ssh -n ha-037596 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-037596 cp ha-037596:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile763453991/001/cp-test_ha-037596.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-037596 ssh -n ha-037596 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-037596 cp ha-037596:/home/docker/cp-test.txt ha-037596-m02:/home/docker/cp-test_ha-037596_ha-037596-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-037596 ssh -n ha-037596 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-037596 ssh -n ha-037596-m02 "sudo cat /home/docker/cp-test_ha-037596_ha-037596-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-037596 cp ha-037596:/home/docker/cp-test.txt ha-037596-m03:/home/docker/cp-test_ha-037596_ha-037596-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-037596 ssh -n ha-037596 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-037596 ssh -n ha-037596-m03 "sudo cat /home/docker/cp-test_ha-037596_ha-037596-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-037596 cp ha-037596:/home/docker/cp-test.txt ha-037596-m04:/home/docker/cp-test_ha-037596_ha-037596-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-037596 ssh -n ha-037596 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-037596 ssh -n ha-037596-m04 "sudo cat /home/docker/cp-test_ha-037596_ha-037596-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-037596 cp testdata/cp-test.txt ha-037596-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-037596 ssh -n ha-037596-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-037596 cp ha-037596-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile763453991/001/cp-test_ha-037596-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-037596 ssh -n ha-037596-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-037596 cp ha-037596-m02:/home/docker/cp-test.txt ha-037596:/home/docker/cp-test_ha-037596-m02_ha-037596.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-037596 ssh -n ha-037596-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-037596 ssh -n ha-037596 "sudo cat /home/docker/cp-test_ha-037596-m02_ha-037596.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-037596 cp ha-037596-m02:/home/docker/cp-test.txt ha-037596-m03:/home/docker/cp-test_ha-037596-m02_ha-037596-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-037596 ssh -n ha-037596-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-037596 ssh -n ha-037596-m03 "sudo cat /home/docker/cp-test_ha-037596-m02_ha-037596-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-037596 cp ha-037596-m02:/home/docker/cp-test.txt ha-037596-m04:/home/docker/cp-test_ha-037596-m02_ha-037596-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-037596 ssh -n ha-037596-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-037596 ssh -n ha-037596-m04 "sudo cat /home/docker/cp-test_ha-037596-m02_ha-037596-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-037596 cp testdata/cp-test.txt ha-037596-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-037596 ssh -n ha-037596-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-037596 cp ha-037596-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile763453991/001/cp-test_ha-037596-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-037596 ssh -n ha-037596-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-037596 cp ha-037596-m03:/home/docker/cp-test.txt ha-037596:/home/docker/cp-test_ha-037596-m03_ha-037596.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-037596 ssh -n ha-037596-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-037596 ssh -n ha-037596 "sudo cat /home/docker/cp-test_ha-037596-m03_ha-037596.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-037596 cp ha-037596-m03:/home/docker/cp-test.txt ha-037596-m02:/home/docker/cp-test_ha-037596-m03_ha-037596-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-037596 ssh -n ha-037596-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-037596 ssh -n ha-037596-m02 "sudo cat /home/docker/cp-test_ha-037596-m03_ha-037596-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-037596 cp ha-037596-m03:/home/docker/cp-test.txt ha-037596-m04:/home/docker/cp-test_ha-037596-m03_ha-037596-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-037596 ssh -n ha-037596-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-037596 ssh -n ha-037596-m04 "sudo cat /home/docker/cp-test_ha-037596-m03_ha-037596-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-037596 cp testdata/cp-test.txt ha-037596-m04:/home/docker/cp-test.txt
E0916 17:33:26.884790  382962 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-375661/.minikube/profiles/functional-841551/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-037596 ssh -n ha-037596-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-037596 cp ha-037596-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile763453991/001/cp-test_ha-037596-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-037596 ssh -n ha-037596-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-037596 cp ha-037596-m04:/home/docker/cp-test.txt ha-037596:/home/docker/cp-test_ha-037596-m04_ha-037596.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-037596 ssh -n ha-037596-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-037596 ssh -n ha-037596 "sudo cat /home/docker/cp-test_ha-037596-m04_ha-037596.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-037596 cp ha-037596-m04:/home/docker/cp-test.txt ha-037596-m02:/home/docker/cp-test_ha-037596-m04_ha-037596-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-037596 ssh -n ha-037596-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-037596 ssh -n ha-037596-m02 "sudo cat /home/docker/cp-test_ha-037596-m04_ha-037596-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-037596 cp ha-037596-m04:/home/docker/cp-test.txt ha-037596-m03:/home/docker/cp-test_ha-037596-m04_ha-037596-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-037596 ssh -n ha-037596-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-037596 ssh -n ha-037596-m03 "sudo cat /home/docker/cp-test_ha-037596-m04_ha-037596-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (12.08s)

                                                
                                    
TestMultiControlPlane/serial/StopSecondaryNode (13.18s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-linux-amd64 -p ha-037596 node stop m02 -v=7 --alsologtostderr
E0916 17:33:35.964528  382962 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-375661/.minikube/profiles/addons-214113/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:363: (dbg) Done: out/minikube-linux-amd64 -p ha-037596 node stop m02 -v=7 --alsologtostderr: (12.576269226s)
ha_test.go:369: (dbg) Run:  out/minikube-linux-amd64 -p ha-037596 status -v=7 --alsologtostderr
ha_test.go:369: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-037596 status -v=7 --alsologtostderr: exit status 7 (597.88506ms)

-- stdout --
	ha-037596
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-037596-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-037596-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-037596-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0916 17:33:42.226790  398555 out.go:345] Setting OutFile to fd 1 ...
	I0916 17:33:42.227064  398555 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 17:33:42.227074  398555 out.go:358] Setting ErrFile to fd 2...
	I0916 17:33:42.227078  398555 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 17:33:42.227239  398555 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19649-375661/.minikube/bin
	I0916 17:33:42.227404  398555 out.go:352] Setting JSON to false
	I0916 17:33:42.227435  398555 mustload.go:65] Loading cluster: ha-037596
	I0916 17:33:42.227480  398555 notify.go:220] Checking for updates...
	I0916 17:33:42.227828  398555 config.go:182] Loaded profile config "ha-037596": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0916 17:33:42.227843  398555 status.go:255] checking status of ha-037596 ...
	I0916 17:33:42.228251  398555 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0916 17:33:42.228308  398555 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 17:33:42.244180  398555 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45005
	I0916 17:33:42.244647  398555 main.go:141] libmachine: () Calling .GetVersion
	I0916 17:33:42.245433  398555 main.go:141] libmachine: Using API Version  1
	I0916 17:33:42.245463  398555 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 17:33:42.245830  398555 main.go:141] libmachine: () Calling .GetMachineName
	I0916 17:33:42.246040  398555 main.go:141] libmachine: (ha-037596) Calling .GetState
	I0916 17:33:42.247908  398555 status.go:330] ha-037596 host status = "Running" (err=<nil>)
	I0916 17:33:42.247936  398555 host.go:66] Checking if "ha-037596" exists ...
	I0916 17:33:42.248319  398555 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0916 17:33:42.248371  398555 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 17:33:42.263281  398555 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39681
	I0916 17:33:42.263609  398555 main.go:141] libmachine: () Calling .GetVersion
	I0916 17:33:42.264029  398555 main.go:141] libmachine: Using API Version  1
	I0916 17:33:42.264064  398555 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 17:33:42.264354  398555 main.go:141] libmachine: () Calling .GetMachineName
	I0916 17:33:42.264564  398555 main.go:141] libmachine: (ha-037596) Calling .GetIP
	I0916 17:33:42.267315  398555 main.go:141] libmachine: (ha-037596) DBG | domain ha-037596 has defined MAC address 52:54:00:94:43:79 in network mk-ha-037596
	I0916 17:33:42.267775  398555 main.go:141] libmachine: (ha-037596) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:94:43:79", ip: ""} in network mk-ha-037596: {Iface:virbr1 ExpiryTime:2024-09-16 18:28:48 +0000 UTC Type:0 Mac:52:54:00:94:43:79 Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:ha-037596 Clientid:01:52:54:00:94:43:79}
	I0916 17:33:42.267805  398555 main.go:141] libmachine: (ha-037596) DBG | domain ha-037596 has defined IP address 192.168.39.6 and MAC address 52:54:00:94:43:79 in network mk-ha-037596
	I0916 17:33:42.267962  398555 host.go:66] Checking if "ha-037596" exists ...
	I0916 17:33:42.268243  398555 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0916 17:33:42.268291  398555 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 17:33:42.284670  398555 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37329
	I0916 17:33:42.285011  398555 main.go:141] libmachine: () Calling .GetVersion
	I0916 17:33:42.285465  398555 main.go:141] libmachine: Using API Version  1
	I0916 17:33:42.285491  398555 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 17:33:42.285823  398555 main.go:141] libmachine: () Calling .GetMachineName
	I0916 17:33:42.286059  398555 main.go:141] libmachine: (ha-037596) Calling .DriverName
	I0916 17:33:42.286253  398555 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0916 17:33:42.286320  398555 main.go:141] libmachine: (ha-037596) Calling .GetSSHHostname
	I0916 17:33:42.288877  398555 main.go:141] libmachine: (ha-037596) DBG | domain ha-037596 has defined MAC address 52:54:00:94:43:79 in network mk-ha-037596
	I0916 17:33:42.289305  398555 main.go:141] libmachine: (ha-037596) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:94:43:79", ip: ""} in network mk-ha-037596: {Iface:virbr1 ExpiryTime:2024-09-16 18:28:48 +0000 UTC Type:0 Mac:52:54:00:94:43:79 Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:ha-037596 Clientid:01:52:54:00:94:43:79}
	I0916 17:33:42.289333  398555 main.go:141] libmachine: (ha-037596) DBG | domain ha-037596 has defined IP address 192.168.39.6 and MAC address 52:54:00:94:43:79 in network mk-ha-037596
	I0916 17:33:42.289502  398555 main.go:141] libmachine: (ha-037596) Calling .GetSSHPort
	I0916 17:33:42.289684  398555 main.go:141] libmachine: (ha-037596) Calling .GetSSHKeyPath
	I0916 17:33:42.289827  398555 main.go:141] libmachine: (ha-037596) Calling .GetSSHUsername
	I0916 17:33:42.289961  398555 sshutil.go:53] new ssh client: &{IP:192.168.39.6 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19649-375661/.minikube/machines/ha-037596/id_rsa Username:docker}
	I0916 17:33:42.374685  398555 ssh_runner.go:195] Run: systemctl --version
	I0916 17:33:42.383258  398555 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0916 17:33:42.398336  398555 kubeconfig.go:125] found "ha-037596" server: "https://192.168.39.254:8443"
	I0916 17:33:42.398377  398555 api_server.go:166] Checking apiserver status ...
	I0916 17:33:42.398423  398555 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 17:33:42.413229  398555 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1951/cgroup
	W0916 17:33:42.421167  398555 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1951/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0916 17:33:42.421202  398555 ssh_runner.go:195] Run: ls
	I0916 17:33:42.424876  398555 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0916 17:33:42.430293  398555 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0916 17:33:42.430313  398555 status.go:422] ha-037596 apiserver status = Running (err=<nil>)
	I0916 17:33:42.430324  398555 status.go:257] ha-037596 status: &{Name:ha-037596 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0916 17:33:42.430339  398555 status.go:255] checking status of ha-037596-m02 ...
	I0916 17:33:42.430613  398555 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0916 17:33:42.430651  398555 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 17:33:42.445538  398555 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42353
	I0916 17:33:42.446020  398555 main.go:141] libmachine: () Calling .GetVersion
	I0916 17:33:42.446557  398555 main.go:141] libmachine: Using API Version  1
	I0916 17:33:42.446576  398555 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 17:33:42.446884  398555 main.go:141] libmachine: () Calling .GetMachineName
	I0916 17:33:42.447054  398555 main.go:141] libmachine: (ha-037596-m02) Calling .GetState
	I0916 17:33:42.448530  398555 status.go:330] ha-037596-m02 host status = "Stopped" (err=<nil>)
	I0916 17:33:42.448542  398555 status.go:343] host is not running, skipping remaining checks
	I0916 17:33:42.448548  398555 status.go:257] ha-037596-m02 status: &{Name:ha-037596-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0916 17:33:42.448562  398555 status.go:255] checking status of ha-037596-m03 ...
	I0916 17:33:42.448848  398555 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0916 17:33:42.448878  398555 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 17:33:42.463382  398555 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41599
	I0916 17:33:42.463788  398555 main.go:141] libmachine: () Calling .GetVersion
	I0916 17:33:42.464279  398555 main.go:141] libmachine: Using API Version  1
	I0916 17:33:42.464305  398555 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 17:33:42.464672  398555 main.go:141] libmachine: () Calling .GetMachineName
	I0916 17:33:42.464850  398555 main.go:141] libmachine: (ha-037596-m03) Calling .GetState
	I0916 17:33:42.466229  398555 status.go:330] ha-037596-m03 host status = "Running" (err=<nil>)
	I0916 17:33:42.466248  398555 host.go:66] Checking if "ha-037596-m03" exists ...
	I0916 17:33:42.466520  398555 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0916 17:33:42.466551  398555 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 17:33:42.480962  398555 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40407
	I0916 17:33:42.481375  398555 main.go:141] libmachine: () Calling .GetVersion
	I0916 17:33:42.481779  398555 main.go:141] libmachine: Using API Version  1
	I0916 17:33:42.481798  398555 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 17:33:42.482059  398555 main.go:141] libmachine: () Calling .GetMachineName
	I0916 17:33:42.482200  398555 main.go:141] libmachine: (ha-037596-m03) Calling .GetIP
	I0916 17:33:42.484593  398555 main.go:141] libmachine: (ha-037596-m03) DBG | domain ha-037596-m03 has defined MAC address 52:54:00:d2:c6:61 in network mk-ha-037596
	I0916 17:33:42.484944  398555 main.go:141] libmachine: (ha-037596-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:c6:61", ip: ""} in network mk-ha-037596: {Iface:virbr1 ExpiryTime:2024-09-16 18:31:01 +0000 UTC Type:0 Mac:52:54:00:d2:c6:61 Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:ha-037596-m03 Clientid:01:52:54:00:d2:c6:61}
	I0916 17:33:42.484966  398555 main.go:141] libmachine: (ha-037596-m03) DBG | domain ha-037596-m03 has defined IP address 192.168.39.125 and MAC address 52:54:00:d2:c6:61 in network mk-ha-037596
	I0916 17:33:42.485143  398555 host.go:66] Checking if "ha-037596-m03" exists ...
	I0916 17:33:42.485415  398555 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0916 17:33:42.485445  398555 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 17:33:42.499279  398555 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34709
	I0916 17:33:42.499690  398555 main.go:141] libmachine: () Calling .GetVersion
	I0916 17:33:42.500171  398555 main.go:141] libmachine: Using API Version  1
	I0916 17:33:42.500194  398555 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 17:33:42.500518  398555 main.go:141] libmachine: () Calling .GetMachineName
	I0916 17:33:42.500730  398555 main.go:141] libmachine: (ha-037596-m03) Calling .DriverName
	I0916 17:33:42.500918  398555 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0916 17:33:42.500956  398555 main.go:141] libmachine: (ha-037596-m03) Calling .GetSSHHostname
	I0916 17:33:42.503726  398555 main.go:141] libmachine: (ha-037596-m03) DBG | domain ha-037596-m03 has defined MAC address 52:54:00:d2:c6:61 in network mk-ha-037596
	I0916 17:33:42.504268  398555 main.go:141] libmachine: (ha-037596-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:c6:61", ip: ""} in network mk-ha-037596: {Iface:virbr1 ExpiryTime:2024-09-16 18:31:01 +0000 UTC Type:0 Mac:52:54:00:d2:c6:61 Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:ha-037596-m03 Clientid:01:52:54:00:d2:c6:61}
	I0916 17:33:42.504295  398555 main.go:141] libmachine: (ha-037596-m03) DBG | domain ha-037596-m03 has defined IP address 192.168.39.125 and MAC address 52:54:00:d2:c6:61 in network mk-ha-037596
	I0916 17:33:42.504412  398555 main.go:141] libmachine: (ha-037596-m03) Calling .GetSSHPort
	I0916 17:33:42.504580  398555 main.go:141] libmachine: (ha-037596-m03) Calling .GetSSHKeyPath
	I0916 17:33:42.504730  398555 main.go:141] libmachine: (ha-037596-m03) Calling .GetSSHUsername
	I0916 17:33:42.504861  398555 sshutil.go:53] new ssh client: &{IP:192.168.39.125 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19649-375661/.minikube/machines/ha-037596-m03/id_rsa Username:docker}
	I0916 17:33:42.580462  398555 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0916 17:33:42.595551  398555 kubeconfig.go:125] found "ha-037596" server: "https://192.168.39.254:8443"
	I0916 17:33:42.595575  398555 api_server.go:166] Checking apiserver status ...
	I0916 17:33:42.595609  398555 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 17:33:42.608932  398555 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1731/cgroup
	W0916 17:33:42.616843  398555 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1731/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0916 17:33:42.616884  398555 ssh_runner.go:195] Run: ls
	I0916 17:33:42.620607  398555 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0916 17:33:42.624524  398555 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0916 17:33:42.624553  398555 status.go:422] ha-037596-m03 apiserver status = Running (err=<nil>)
	I0916 17:33:42.624567  398555 status.go:257] ha-037596-m03 status: &{Name:ha-037596-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0916 17:33:42.624595  398555 status.go:255] checking status of ha-037596-m04 ...
	I0916 17:33:42.624968  398555 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0916 17:33:42.625013  398555 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 17:33:42.639971  398555 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33515
	I0916 17:33:42.640382  398555 main.go:141] libmachine: () Calling .GetVersion
	I0916 17:33:42.640862  398555 main.go:141] libmachine: Using API Version  1
	I0916 17:33:42.640880  398555 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 17:33:42.641192  398555 main.go:141] libmachine: () Calling .GetMachineName
	I0916 17:33:42.641373  398555 main.go:141] libmachine: (ha-037596-m04) Calling .GetState
	I0916 17:33:42.642615  398555 status.go:330] ha-037596-m04 host status = "Running" (err=<nil>)
	I0916 17:33:42.642634  398555 host.go:66] Checking if "ha-037596-m04" exists ...
	I0916 17:33:42.642911  398555 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0916 17:33:42.642962  398555 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 17:33:42.658148  398555 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38215
	I0916 17:33:42.658557  398555 main.go:141] libmachine: () Calling .GetVersion
	I0916 17:33:42.658955  398555 main.go:141] libmachine: Using API Version  1
	I0916 17:33:42.658976  398555 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 17:33:42.659315  398555 main.go:141] libmachine: () Calling .GetMachineName
	I0916 17:33:42.659515  398555 main.go:141] libmachine: (ha-037596-m04) Calling .GetIP
	I0916 17:33:42.662048  398555 main.go:141] libmachine: (ha-037596-m04) DBG | domain ha-037596-m04 has defined MAC address 52:54:00:ec:3f:ff in network mk-ha-037596
	I0916 17:33:42.662448  398555 main.go:141] libmachine: (ha-037596-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:3f:ff", ip: ""} in network mk-ha-037596: {Iface:virbr1 ExpiryTime:2024-09-16 18:32:32 +0000 UTC Type:0 Mac:52:54:00:ec:3f:ff Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:ha-037596-m04 Clientid:01:52:54:00:ec:3f:ff}
	I0916 17:33:42.662475  398555 main.go:141] libmachine: (ha-037596-m04) DBG | domain ha-037596-m04 has defined IP address 192.168.39.76 and MAC address 52:54:00:ec:3f:ff in network mk-ha-037596
	I0916 17:33:42.662603  398555 host.go:66] Checking if "ha-037596-m04" exists ...
	I0916 17:33:42.662882  398555 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0916 17:33:42.662921  398555 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 17:33:42.677160  398555 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41453
	I0916 17:33:42.677485  398555 main.go:141] libmachine: () Calling .GetVersion
	I0916 17:33:42.677925  398555 main.go:141] libmachine: Using API Version  1
	I0916 17:33:42.677947  398555 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 17:33:42.678218  398555 main.go:141] libmachine: () Calling .GetMachineName
	I0916 17:33:42.678387  398555 main.go:141] libmachine: (ha-037596-m04) Calling .DriverName
	I0916 17:33:42.678557  398555 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0916 17:33:42.678579  398555 main.go:141] libmachine: (ha-037596-m04) Calling .GetSSHHostname
	I0916 17:33:42.680752  398555 main.go:141] libmachine: (ha-037596-m04) DBG | domain ha-037596-m04 has defined MAC address 52:54:00:ec:3f:ff in network mk-ha-037596
	I0916 17:33:42.681152  398555 main.go:141] libmachine: (ha-037596-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:3f:ff", ip: ""} in network mk-ha-037596: {Iface:virbr1 ExpiryTime:2024-09-16 18:32:32 +0000 UTC Type:0 Mac:52:54:00:ec:3f:ff Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:ha-037596-m04 Clientid:01:52:54:00:ec:3f:ff}
	I0916 17:33:42.681178  398555 main.go:141] libmachine: (ha-037596-m04) DBG | domain ha-037596-m04 has defined IP address 192.168.39.76 and MAC address 52:54:00:ec:3f:ff in network mk-ha-037596
	I0916 17:33:42.681302  398555 main.go:141] libmachine: (ha-037596-m04) Calling .GetSSHPort
	I0916 17:33:42.681449  398555 main.go:141] libmachine: (ha-037596-m04) Calling .GetSSHKeyPath
	I0916 17:33:42.681571  398555 main.go:141] libmachine: (ha-037596-m04) Calling .GetSSHUsername
	I0916 17:33:42.681688  398555 sshutil.go:53] new ssh client: &{IP:192.168.39.76 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19649-375661/.minikube/machines/ha-037596-m04/id_rsa Username:docker}
	I0916 17:33:42.759843  398555 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0916 17:33:42.775793  398555 status.go:257] ha-037596-m04 status: &{Name:ha-037596-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (13.18s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.37s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.37s)

                                                
                                    
TestMultiControlPlane/serial/RestartSecondaryNode (42.48s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-linux-amd64 -p ha-037596 node start m02 -v=7 --alsologtostderr
E0916 17:34:03.666982  382962 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-375661/.minikube/profiles/addons-214113/client.crt: no such file or directory" logger="UnhandledError"
E0916 17:34:07.846760  382962 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-375661/.minikube/profiles/functional-841551/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:420: (dbg) Done: out/minikube-linux-amd64 -p ha-037596 node start m02 -v=7 --alsologtostderr: (41.631938234s)
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-037596 status -v=7 --alsologtostderr
ha_test.go:448: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (42.48s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.5s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.50s)

                                                
                                    
TestMultiControlPlane/serial/RestartClusterKeepsNodes (227.62s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-037596 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-linux-amd64 stop -p ha-037596 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Done: out/minikube-linux-amd64 stop -p ha-037596 -v=7 --alsologtostderr: (40.436647692s)
ha_test.go:467: (dbg) Run:  out/minikube-linux-amd64 start -p ha-037596 --wait=true -v=7 --alsologtostderr
E0916 17:35:29.768574  382962 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-375661/.minikube/profiles/functional-841551/client.crt: no such file or directory" logger="UnhandledError"
E0916 17:37:45.907572  382962 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-375661/.minikube/profiles/functional-841551/client.crt: no such file or directory" logger="UnhandledError"
E0916 17:38:13.609871  382962 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-375661/.minikube/profiles/functional-841551/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:467: (dbg) Done: out/minikube-linux-amd64 start -p ha-037596 --wait=true -v=7 --alsologtostderr: (3m7.084711119s)
ha_test.go:472: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-037596
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (227.62s)

                                                
                                    
TestMultiControlPlane/serial/DeleteSecondaryNode (6.88s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-linux-amd64 -p ha-037596 node delete m03 -v=7 --alsologtostderr
ha_test.go:487: (dbg) Done: out/minikube-linux-amd64 -p ha-037596 node delete m03 -v=7 --alsologtostderr: (6.170988173s)
ha_test.go:493: (dbg) Run:  out/minikube-linux-amd64 -p ha-037596 status -v=7 --alsologtostderr
ha_test.go:511: (dbg) Run:  kubectl get nodes
ha_test.go:519: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (6.88s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.37s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.37s)

                                                
                                    
TestMultiControlPlane/serial/StopCluster (38.08s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-linux-amd64 -p ha-037596 stop -v=7 --alsologtostderr
E0916 17:38:35.964113  382962 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-375661/.minikube/profiles/addons-214113/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:531: (dbg) Done: out/minikube-linux-amd64 -p ha-037596 stop -v=7 --alsologtostderr: (37.983926361s)
ha_test.go:537: (dbg) Run:  out/minikube-linux-amd64 -p ha-037596 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-037596 status -v=7 --alsologtostderr: exit status 7 (99.369881ms)

-- stdout --
	ha-037596
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-037596-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-037596-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0916 17:38:59.020913  400887 out.go:345] Setting OutFile to fd 1 ...
	I0916 17:38:59.021180  400887 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 17:38:59.021189  400887 out.go:358] Setting ErrFile to fd 2...
	I0916 17:38:59.021194  400887 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 17:38:59.021360  400887 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19649-375661/.minikube/bin
	I0916 17:38:59.021505  400887 out.go:352] Setting JSON to false
	I0916 17:38:59.021533  400887 mustload.go:65] Loading cluster: ha-037596
	I0916 17:38:59.021581  400887 notify.go:220] Checking for updates...
	I0916 17:38:59.021896  400887 config.go:182] Loaded profile config "ha-037596": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0916 17:38:59.021911  400887 status.go:255] checking status of ha-037596 ...
	I0916 17:38:59.022293  400887 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0916 17:38:59.022328  400887 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 17:38:59.040169  400887 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34137
	I0916 17:38:59.040680  400887 main.go:141] libmachine: () Calling .GetVersion
	I0916 17:38:59.041310  400887 main.go:141] libmachine: Using API Version  1
	I0916 17:38:59.041357  400887 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 17:38:59.041676  400887 main.go:141] libmachine: () Calling .GetMachineName
	I0916 17:38:59.041873  400887 main.go:141] libmachine: (ha-037596) Calling .GetState
	I0916 17:38:59.043418  400887 status.go:330] ha-037596 host status = "Stopped" (err=<nil>)
	I0916 17:38:59.043431  400887 status.go:343] host is not running, skipping remaining checks
	I0916 17:38:59.043437  400887 status.go:257] ha-037596 status: &{Name:ha-037596 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0916 17:38:59.043451  400887 status.go:255] checking status of ha-037596-m02 ...
	I0916 17:38:59.043817  400887 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0916 17:38:59.043860  400887 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 17:38:59.057866  400887 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40563
	I0916 17:38:59.058272  400887 main.go:141] libmachine: () Calling .GetVersion
	I0916 17:38:59.058743  400887 main.go:141] libmachine: Using API Version  1
	I0916 17:38:59.058773  400887 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 17:38:59.059082  400887 main.go:141] libmachine: () Calling .GetMachineName
	I0916 17:38:59.059270  400887 main.go:141] libmachine: (ha-037596-m02) Calling .GetState
	I0916 17:38:59.060489  400887 status.go:330] ha-037596-m02 host status = "Stopped" (err=<nil>)
	I0916 17:38:59.060503  400887 status.go:343] host is not running, skipping remaining checks
	I0916 17:38:59.060508  400887 status.go:257] ha-037596-m02 status: &{Name:ha-037596-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0916 17:38:59.060523  400887 status.go:255] checking status of ha-037596-m04 ...
	I0916 17:38:59.060815  400887 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0916 17:38:59.060874  400887 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 17:38:59.074180  400887 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34231
	I0916 17:38:59.074492  400887 main.go:141] libmachine: () Calling .GetVersion
	I0916 17:38:59.074916  400887 main.go:141] libmachine: Using API Version  1
	I0916 17:38:59.074937  400887 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 17:38:59.075241  400887 main.go:141] libmachine: () Calling .GetMachineName
	I0916 17:38:59.075396  400887 main.go:141] libmachine: (ha-037596-m04) Calling .GetState
	I0916 17:38:59.076574  400887 status.go:330] ha-037596-m04 host status = "Stopped" (err=<nil>)
	I0916 17:38:59.076587  400887 status.go:343] host is not running, skipping remaining checks
	I0916 17:38:59.076603  400887 status.go:257] ha-037596-m04 status: &{Name:ha-037596-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (38.08s)

                                                
                                    
TestMultiControlPlane/serial/RestartCluster (156.45s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-linux-amd64 start -p ha-037596 --wait=true -v=7 --alsologtostderr --driver=kvm2 
ha_test.go:560: (dbg) Done: out/minikube-linux-amd64 start -p ha-037596 --wait=true -v=7 --alsologtostderr --driver=kvm2 : (2m35.738460534s)
ha_test.go:566: (dbg) Run:  out/minikube-linux-amd64 -p ha-037596 status -v=7 --alsologtostderr
ha_test.go:584: (dbg) Run:  kubectl get nodes
ha_test.go:592: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (156.45s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.36s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.36s)

                                                
                                    
TestMultiControlPlane/serial/AddSecondaryNode (82.63s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-037596 --control-plane -v=7 --alsologtostderr
E0916 17:42:45.907893  382962 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-375661/.minikube/profiles/functional-841551/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:605: (dbg) Done: out/minikube-linux-amd64 node add -p ha-037596 --control-plane -v=7 --alsologtostderr: (1m21.820092426s)
ha_test.go:611: (dbg) Run:  out/minikube-linux-amd64 -p ha-037596 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (82.63s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.52s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.52s)

                                                
                                    
TestImageBuild/serial/Setup (46.02s)

                                                
                                                
=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-linux-amd64 start -p image-445088 --driver=kvm2 
E0916 17:43:35.964547  382962 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-375661/.minikube/profiles/addons-214113/client.crt: no such file or directory" logger="UnhandledError"
image_test.go:69: (dbg) Done: out/minikube-linux-amd64 start -p image-445088 --driver=kvm2 : (46.024316139s)
--- PASS: TestImageBuild/serial/Setup (46.02s)

                                                
                                    
TestImageBuild/serial/NormalBuild (2.75s)

                                                
                                                
=== RUN   TestImageBuild/serial/NormalBuild
image_test.go:78: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-445088
image_test.go:78: (dbg) Done: out/minikube-linux-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-445088: (2.748013579s)
--- PASS: TestImageBuild/serial/NormalBuild (2.75s)

                                                
                                    
TestImageBuild/serial/BuildWithBuildArg (1.19s)

                                                
                                                
=== RUN   TestImageBuild/serial/BuildWithBuildArg
image_test.go:99: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-445088
image_test.go:99: (dbg) Done: out/minikube-linux-amd64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-445088: (1.185614495s)
--- PASS: TestImageBuild/serial/BuildWithBuildArg (1.19s)

                                                
                                    
TestImageBuild/serial/BuildWithDockerIgnore (1.02s)

                                                
                                                
=== RUN   TestImageBuild/serial/BuildWithDockerIgnore
image_test.go:133: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-445088
image_test.go:133: (dbg) Done: out/minikube-linux-amd64 image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-445088: (1.017195982s)
--- PASS: TestImageBuild/serial/BuildWithDockerIgnore (1.02s)

                                                
                                    
TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.82s)

                                                
                                                
=== RUN   TestImageBuild/serial/BuildWithSpecifiedDockerfile
image_test.go:88: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-445088
--- PASS: TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.82s)

                                                
                                    
TestJSONOutput/start/Command (58.7s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-265596 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2 
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-265596 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2 : (58.702936s)
--- PASS: TestJSONOutput/start/Command (58.70s)

                                                
                                    
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Command (0.51s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-265596 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.51s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Command (0.5s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-265596 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.50s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (7.46s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-265596 --output=json --user=testUser
E0916 17:44:59.029198  382962 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-375661/.minikube/profiles/addons-214113/client.crt: no such file or directory" logger="UnhandledError"
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-265596 --output=json --user=testUser: (7.455805243s)
--- PASS: TestJSONOutput/stop/Command (7.46s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.19s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-753001 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-753001 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (58.851404ms)

-- stdout --
	{"specversion":"1.0","id":"df98ccba-a8db-4bf3-b0c1-550142f9df59","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-753001] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"02b901ff-b0d9-41d8-884c-b81e3f99db1f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19649"}}
	{"specversion":"1.0","id":"ca9459a1-2653-4625-819d-181039b3693c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"c4a70bcd-6006-48d4-ac31-5a55b384cd29","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19649-375661/kubeconfig"}}
	{"specversion":"1.0","id":"c2583556-e2b8-46a9-95c6-751e47533360","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19649-375661/.minikube"}}
	{"specversion":"1.0","id":"80a32d35-e4c3-4300-b0c6-aac2c90759da","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"1b233dc2-604d-406d-9107-99458dcf78f5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"7be58d68-15a9-468b-a11a-ef9ebde01bb8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-753001" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-753001
--- PASS: TestErrorJSONOutput (0.19s)
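Note: each stdout line above is a single CloudEvents-style JSON object, so the stream is line-delimited JSON. A minimal sketch for pulling out just the human-readable messages (assumes jq is available on the host; it is not part of the test itself):

	# print data.message from every event; events without one are skipped
	out/minikube-linux-amd64 start -p json-output-error-753001 --memory=2200 --output=json --wait=true --driver=fail | jq -r '.data.message // empty'

The pause, unpause, and stop runs above are tested against the same event schema, so the same filter should apply to any command run with --output=json.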
TestMainNoArgs (0.04s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.04s)

TestMinikubeProfile (97.35s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-346528 --driver=kvm2 
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-346528 --driver=kvm2 : (45.578540914s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-357485 --driver=kvm2 
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-357485 --driver=kvm2 : (49.20939662s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-346528
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-357485
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-357485" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-357485
helpers_test.go:175: Cleaning up "first-346528" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-346528
--- PASS: TestMinikubeProfile (97.35s)
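Note: the assertions above parse `profile list -ojson`. A sketch of the same query by hand; jq is assumed to be installed, and the .valid[].Name path is an assumption about the JSON layout (minikube groups profiles into "valid" and "invalid" arrays):

	# print the names of all valid profiles
	out/minikube-linux-amd64 profile list -ojson | jq -r '.valid[].Name'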
TestMountStart/serial/StartWithMountFirst (30.01s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-217057 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2 
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-217057 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2 : (29.010911021s)
--- PASS: TestMountStart/serial/StartWithMountFirst (30.01s)

TestMountStart/serial/VerifyMountFirst (0.37s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-217057 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-217057 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountFirst (0.37s)
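Note: the verification amounts to two checks: the host directory must be listable inside the guest, and the mount must appear with type 9p. Both commands, taken verbatim from the run above, can be replayed by hand against the same profile:

	out/minikube-linux-amd64 -p mount-start-1-217057 ssh -- ls /minikube-host
	out/minikube-linux-amd64 -p mount-start-1-217057 ssh -- mount | grep 9p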
TestMountStart/serial/StartWithMountSecond (28.77s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-232902 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2 
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-232902 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2 : (27.765367524s)
--- PASS: TestMountStart/serial/StartWithMountSecond (28.77s)

TestMountStart/serial/VerifyMountSecond (0.37s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-232902 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-232902 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountSecond (0.37s)

TestMountStart/serial/DeleteFirst (0.67s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-217057 --alsologtostderr -v=5
--- PASS: TestMountStart/serial/DeleteFirst (0.67s)

TestMountStart/serial/VerifyMountPostDelete (0.37s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-232902 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-232902 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.37s)

TestMountStart/serial/Stop (2.39s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-232902
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-232902: (2.38874455s)
--- PASS: TestMountStart/serial/Stop (2.39s)

TestMountStart/serial/RestartStopped (24.1s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-232902
E0916 17:47:45.907880  382962 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-375661/.minikube/profiles/functional-841551/client.crt: no such file or directory" logger="UnhandledError"
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-232902: (23.097990991s)
--- PASS: TestMountStart/serial/RestartStopped (24.10s)

TestMountStart/serial/VerifyMountPostStop (0.37s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-232902 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-232902 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.37s)

TestMultiNode/serial/FreshStart2Nodes (122.88s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-602212 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2 
E0916 17:48:35.964112  382962 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-375661/.minikube/profiles/addons-214113/client.crt: no such file or directory" logger="UnhandledError"
E0916 17:49:08.971238  382962 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-375661/.minikube/profiles/functional-841551/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-602212 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2 : (2m2.493582632s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-602212 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (122.88s)

TestMultiNode/serial/DeployApp2Nodes (4.79s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-602212 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-602212 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-602212 -- rollout status deployment/busybox: (3.378856936s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-602212 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-602212 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-602212 -- exec busybox-7dff88458-7l5ss -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-602212 -- exec busybox-7dff88458-qrjrq -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-602212 -- exec busybox-7dff88458-7l5ss -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-602212 -- exec busybox-7dff88458-qrjrq -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-602212 -- exec busybox-7dff88458-7l5ss -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-602212 -- exec busybox-7dff88458-qrjrq -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (4.79s)
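Note: the checks above resolve kubernetes.io, kubernetes.default, and kubernetes.default.svc.cluster.local from every busybox replica, which exercises external and in-cluster DNS on both nodes. A minimal sketch of the same loop, discovering pod names the way the test does:

	for pod in $(out/minikube-linux-amd64 kubectl -p multinode-602212 -- get pods -o 'jsonpath={.items[*].metadata.name}'); do
	  out/minikube-linux-amd64 kubectl -p multinode-602212 -- exec "$pod" -- nslookup kubernetes.default.svc.cluster.local
	done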
TestMultiNode/serial/PingHostFrom2Pods (0.79s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-602212 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-602212 -- exec busybox-7dff88458-7l5ss -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-602212 -- exec busybox-7dff88458-7l5ss -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-602212 -- exec busybox-7dff88458-qrjrq -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-602212 -- exec busybox-7dff88458-qrjrq -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.79s)
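Note: the pipeline above takes the fifth line of busybox's nslookup output (the answer for host.minikube.internal), cuts out the third space-separated field (the resolved address), and pings that address once from inside the pod; 192.168.39.1 here is the host side of the KVM guest network. Both steps for one pod, verbatim from the run:

	out/minikube-linux-amd64 kubectl -p multinode-602212 -- exec busybox-7dff88458-7l5ss -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
	out/minikube-linux-amd64 kubectl -p multinode-602212 -- exec busybox-7dff88458-7l5ss -- sh -c "ping -c 1 192.168.39.1"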
TestMultiNode/serial/AddNode (57.1s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-602212 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-602212 -v 3 --alsologtostderr: (56.560353679s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-602212 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (57.10s)

TestMultiNode/serial/MultiNodeLabels (0.06s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-602212 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

TestMultiNode/serial/ProfileList (0.21s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.21s)

TestMultiNode/serial/CopyFile (6.97s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-602212 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-602212 cp testdata/cp-test.txt multinode-602212:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-602212 ssh -n multinode-602212 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-602212 cp multinode-602212:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile300202669/001/cp-test_multinode-602212.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-602212 ssh -n multinode-602212 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-602212 cp multinode-602212:/home/docker/cp-test.txt multinode-602212-m02:/home/docker/cp-test_multinode-602212_multinode-602212-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-602212 ssh -n multinode-602212 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-602212 ssh -n multinode-602212-m02 "sudo cat /home/docker/cp-test_multinode-602212_multinode-602212-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-602212 cp multinode-602212:/home/docker/cp-test.txt multinode-602212-m03:/home/docker/cp-test_multinode-602212_multinode-602212-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-602212 ssh -n multinode-602212 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-602212 ssh -n multinode-602212-m03 "sudo cat /home/docker/cp-test_multinode-602212_multinode-602212-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-602212 cp testdata/cp-test.txt multinode-602212-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-602212 ssh -n multinode-602212-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-602212 cp multinode-602212-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile300202669/001/cp-test_multinode-602212-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-602212 ssh -n multinode-602212-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-602212 cp multinode-602212-m02:/home/docker/cp-test.txt multinode-602212:/home/docker/cp-test_multinode-602212-m02_multinode-602212.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-602212 ssh -n multinode-602212-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-602212 ssh -n multinode-602212 "sudo cat /home/docker/cp-test_multinode-602212-m02_multinode-602212.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-602212 cp multinode-602212-m02:/home/docker/cp-test.txt multinode-602212-m03:/home/docker/cp-test_multinode-602212-m02_multinode-602212-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-602212 ssh -n multinode-602212-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-602212 ssh -n multinode-602212-m03 "sudo cat /home/docker/cp-test_multinode-602212-m02_multinode-602212-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-602212 cp testdata/cp-test.txt multinode-602212-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-602212 ssh -n multinode-602212-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-602212 cp multinode-602212-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile300202669/001/cp-test_multinode-602212-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-602212 ssh -n multinode-602212-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-602212 cp multinode-602212-m03:/home/docker/cp-test.txt multinode-602212:/home/docker/cp-test_multinode-602212-m03_multinode-602212.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-602212 ssh -n multinode-602212-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-602212 ssh -n multinode-602212 "sudo cat /home/docker/cp-test_multinode-602212-m03_multinode-602212.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-602212 cp multinode-602212-m03:/home/docker/cp-test.txt multinode-602212-m02:/home/docker/cp-test_multinode-602212-m03_multinode-602212-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-602212 ssh -n multinode-602212-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-602212 ssh -n multinode-602212-m02 "sudo cat /home/docker/cp-test_multinode-602212-m03_multinode-602212-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (6.97s)
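Note: every case above is the same round trip: copy a file (host-to-node, node-to-host, or node-to-node), then `ssh ... sudo cat` on each side to confirm the contents arrived. The host-to-node case reduces to two commands, verbatim from the first pair above:

	out/minikube-linux-amd64 -p multinode-602212 cp testdata/cp-test.txt multinode-602212:/home/docker/cp-test.txt
	out/minikube-linux-amd64 -p multinode-602212 ssh -n multinode-602212 "sudo cat /home/docker/cp-test.txt"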
TestMultiNode/serial/StopNode (3.19s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-602212 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-602212 node stop m03: (2.381409816s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-602212 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-602212 status: exit status 7 (402.714853ms)

-- stdout --
	multinode-602212
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-602212-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-602212-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-602212 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-602212 status --alsologtostderr: exit status 7 (403.815137ms)

-- stdout --
	multinode-602212
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-602212-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-602212-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0916 17:51:23.948490  409178 out.go:345] Setting OutFile to fd 1 ...
	I0916 17:51:23.948745  409178 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 17:51:23.948756  409178 out.go:358] Setting ErrFile to fd 2...
	I0916 17:51:23.948760  409178 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 17:51:23.948964  409178 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19649-375661/.minikube/bin
	I0916 17:51:23.949178  409178 out.go:352] Setting JSON to false
	I0916 17:51:23.949209  409178 mustload.go:65] Loading cluster: multinode-602212
	I0916 17:51:23.949304  409178 notify.go:220] Checking for updates...
	I0916 17:51:23.949676  409178 config.go:182] Loaded profile config "multinode-602212": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0916 17:51:23.949692  409178 status.go:255] checking status of multinode-602212 ...
	I0916 17:51:23.950103  409178 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0916 17:51:23.950151  409178 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 17:51:23.965502  409178 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36791
	I0916 17:51:23.965964  409178 main.go:141] libmachine: () Calling .GetVersion
	I0916 17:51:23.966628  409178 main.go:141] libmachine: Using API Version  1
	I0916 17:51:23.966658  409178 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 17:51:23.966979  409178 main.go:141] libmachine: () Calling .GetMachineName
	I0916 17:51:23.967157  409178 main.go:141] libmachine: (multinode-602212) Calling .GetState
	I0916 17:51:23.968515  409178 status.go:330] multinode-602212 host status = "Running" (err=<nil>)
	I0916 17:51:23.968530  409178 host.go:66] Checking if "multinode-602212" exists ...
	I0916 17:51:23.968834  409178 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0916 17:51:23.968874  409178 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 17:51:23.983405  409178 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34847
	I0916 17:51:23.983724  409178 main.go:141] libmachine: () Calling .GetVersion
	I0916 17:51:23.984096  409178 main.go:141] libmachine: Using API Version  1
	I0916 17:51:23.984124  409178 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 17:51:23.984420  409178 main.go:141] libmachine: () Calling .GetMachineName
	I0916 17:51:23.984615  409178 main.go:141] libmachine: (multinode-602212) Calling .GetIP
	I0916 17:51:23.987112  409178 main.go:141] libmachine: (multinode-602212) DBG | domain multinode-602212 has defined MAC address 52:54:00:91:b7:c1 in network mk-multinode-602212
	I0916 17:51:23.987461  409178 main.go:141] libmachine: (multinode-602212) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:b7:c1", ip: ""} in network mk-multinode-602212: {Iface:virbr1 ExpiryTime:2024-09-16 18:48:21 +0000 UTC Type:0 Mac:52:54:00:91:b7:c1 Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:multinode-602212 Clientid:01:52:54:00:91:b7:c1}
	I0916 17:51:23.987499  409178 main.go:141] libmachine: (multinode-602212) DBG | domain multinode-602212 has defined IP address 192.168.39.107 and MAC address 52:54:00:91:b7:c1 in network mk-multinode-602212
	I0916 17:51:23.987615  409178 host.go:66] Checking if "multinode-602212" exists ...
	I0916 17:51:23.987868  409178 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0916 17:51:23.987904  409178 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 17:51:24.002020  409178 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40849
	I0916 17:51:24.002386  409178 main.go:141] libmachine: () Calling .GetVersion
	I0916 17:51:24.002796  409178 main.go:141] libmachine: Using API Version  1
	I0916 17:51:24.002809  409178 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 17:51:24.003153  409178 main.go:141] libmachine: () Calling .GetMachineName
	I0916 17:51:24.003306  409178 main.go:141] libmachine: (multinode-602212) Calling .DriverName
	I0916 17:51:24.003460  409178 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0916 17:51:24.003484  409178 main.go:141] libmachine: (multinode-602212) Calling .GetSSHHostname
	I0916 17:51:24.005840  409178 main.go:141] libmachine: (multinode-602212) DBG | domain multinode-602212 has defined MAC address 52:54:00:91:b7:c1 in network mk-multinode-602212
	I0916 17:51:24.006246  409178 main.go:141] libmachine: (multinode-602212) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:b7:c1", ip: ""} in network mk-multinode-602212: {Iface:virbr1 ExpiryTime:2024-09-16 18:48:21 +0000 UTC Type:0 Mac:52:54:00:91:b7:c1 Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:multinode-602212 Clientid:01:52:54:00:91:b7:c1}
	I0916 17:51:24.006280  409178 main.go:141] libmachine: (multinode-602212) DBG | domain multinode-602212 has defined IP address 192.168.39.107 and MAC address 52:54:00:91:b7:c1 in network mk-multinode-602212
	I0916 17:51:24.006413  409178 main.go:141] libmachine: (multinode-602212) Calling .GetSSHPort
	I0916 17:51:24.006567  409178 main.go:141] libmachine: (multinode-602212) Calling .GetSSHKeyPath
	I0916 17:51:24.006708  409178 main.go:141] libmachine: (multinode-602212) Calling .GetSSHUsername
	I0916 17:51:24.006804  409178 sshutil.go:53] new ssh client: &{IP:192.168.39.107 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19649-375661/.minikube/machines/multinode-602212/id_rsa Username:docker}
	I0916 17:51:24.087145  409178 ssh_runner.go:195] Run: systemctl --version
	I0916 17:51:24.094970  409178 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0916 17:51:24.108251  409178 kubeconfig.go:125] found "multinode-602212" server: "https://192.168.39.107:8443"
	I0916 17:51:24.108289  409178 api_server.go:166] Checking apiserver status ...
	I0916 17:51:24.108324  409178 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 17:51:24.120004  409178 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1906/cgroup
	W0916 17:51:24.128128  409178 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1906/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0916 17:51:24.128169  409178 ssh_runner.go:195] Run: ls
	I0916 17:51:24.132111  409178 api_server.go:253] Checking apiserver healthz at https://192.168.39.107:8443/healthz ...
	I0916 17:51:24.137368  409178 api_server.go:279] https://192.168.39.107:8443/healthz returned 200:
	ok
	I0916 17:51:24.137385  409178 status.go:422] multinode-602212 apiserver status = Running (err=<nil>)
	I0916 17:51:24.137395  409178 status.go:257] multinode-602212 status: &{Name:multinode-602212 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0916 17:51:24.137410  409178 status.go:255] checking status of multinode-602212-m02 ...
	I0916 17:51:24.137714  409178 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0916 17:51:24.137748  409178 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 17:51:24.154617  409178 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40111
	I0916 17:51:24.155119  409178 main.go:141] libmachine: () Calling .GetVersion
	I0916 17:51:24.155592  409178 main.go:141] libmachine: Using API Version  1
	I0916 17:51:24.155613  409178 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 17:51:24.155933  409178 main.go:141] libmachine: () Calling .GetMachineName
	I0916 17:51:24.156161  409178 main.go:141] libmachine: (multinode-602212-m02) Calling .GetState
	I0916 17:51:24.157772  409178 status.go:330] multinode-602212-m02 host status = "Running" (err=<nil>)
	I0916 17:51:24.157793  409178 host.go:66] Checking if "multinode-602212-m02" exists ...
	I0916 17:51:24.158130  409178 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0916 17:51:24.158175  409178 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 17:51:24.172926  409178 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44533
	I0916 17:51:24.173338  409178 main.go:141] libmachine: () Calling .GetVersion
	I0916 17:51:24.173861  409178 main.go:141] libmachine: Using API Version  1
	I0916 17:51:24.173882  409178 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 17:51:24.174212  409178 main.go:141] libmachine: () Calling .GetMachineName
	I0916 17:51:24.174382  409178 main.go:141] libmachine: (multinode-602212-m02) Calling .GetIP
	I0916 17:51:24.176784  409178 main.go:141] libmachine: (multinode-602212-m02) DBG | domain multinode-602212-m02 has defined MAC address 52:54:00:3a:dc:17 in network mk-multinode-602212
	I0916 17:51:24.177225  409178 main.go:141] libmachine: (multinode-602212-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3a:dc:17", ip: ""} in network mk-multinode-602212: {Iface:virbr1 ExpiryTime:2024-09-16 18:49:32 +0000 UTC Type:0 Mac:52:54:00:3a:dc:17 Iaid: IPaddr:192.168.39.25 Prefix:24 Hostname:multinode-602212-m02 Clientid:01:52:54:00:3a:dc:17}
	I0916 17:51:24.177249  409178 main.go:141] libmachine: (multinode-602212-m02) DBG | domain multinode-602212-m02 has defined IP address 192.168.39.25 and MAC address 52:54:00:3a:dc:17 in network mk-multinode-602212
	I0916 17:51:24.177476  409178 host.go:66] Checking if "multinode-602212-m02" exists ...
	I0916 17:51:24.177753  409178 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0916 17:51:24.177790  409178 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 17:51:24.192322  409178 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37583
	I0916 17:51:24.192744  409178 main.go:141] libmachine: () Calling .GetVersion
	I0916 17:51:24.193258  409178 main.go:141] libmachine: Using API Version  1
	I0916 17:51:24.193281  409178 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 17:51:24.193583  409178 main.go:141] libmachine: () Calling .GetMachineName
	I0916 17:51:24.193774  409178 main.go:141] libmachine: (multinode-602212-m02) Calling .DriverName
	I0916 17:51:24.193939  409178 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0916 17:51:24.193967  409178 main.go:141] libmachine: (multinode-602212-m02) Calling .GetSSHHostname
	I0916 17:51:24.196216  409178 main.go:141] libmachine: (multinode-602212-m02) DBG | domain multinode-602212-m02 has defined MAC address 52:54:00:3a:dc:17 in network mk-multinode-602212
	I0916 17:51:24.196537  409178 main.go:141] libmachine: (multinode-602212-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3a:dc:17", ip: ""} in network mk-multinode-602212: {Iface:virbr1 ExpiryTime:2024-09-16 18:49:32 +0000 UTC Type:0 Mac:52:54:00:3a:dc:17 Iaid: IPaddr:192.168.39.25 Prefix:24 Hostname:multinode-602212-m02 Clientid:01:52:54:00:3a:dc:17}
	I0916 17:51:24.196566  409178 main.go:141] libmachine: (multinode-602212-m02) DBG | domain multinode-602212-m02 has defined IP address 192.168.39.25 and MAC address 52:54:00:3a:dc:17 in network mk-multinode-602212
	I0916 17:51:24.196714  409178 main.go:141] libmachine: (multinode-602212-m02) Calling .GetSSHPort
	I0916 17:51:24.196876  409178 main.go:141] libmachine: (multinode-602212-m02) Calling .GetSSHKeyPath
	I0916 17:51:24.197012  409178 main.go:141] libmachine: (multinode-602212-m02) Calling .GetSSHUsername
	I0916 17:51:24.197150  409178 sshutil.go:53] new ssh client: &{IP:192.168.39.25 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19649-375661/.minikube/machines/multinode-602212-m02/id_rsa Username:docker}
	I0916 17:51:24.279015  409178 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0916 17:51:24.292242  409178 status.go:257] multinode-602212-m02 status: &{Name:multinode-602212-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0916 17:51:24.292267  409178 status.go:255] checking status of multinode-602212-m03 ...
	I0916 17:51:24.292566  409178 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0916 17:51:24.292602  409178 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 17:51:24.307010  409178 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45421
	I0916 17:51:24.307368  409178 main.go:141] libmachine: () Calling .GetVersion
	I0916 17:51:24.307803  409178 main.go:141] libmachine: Using API Version  1
	I0916 17:51:24.307822  409178 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 17:51:24.308126  409178 main.go:141] libmachine: () Calling .GetMachineName
	I0916 17:51:24.308309  409178 main.go:141] libmachine: (multinode-602212-m03) Calling .GetState
	I0916 17:51:24.309560  409178 status.go:330] multinode-602212-m03 host status = "Stopped" (err=<nil>)
	I0916 17:51:24.309577  409178 status.go:343] host is not running, skipping remaining checks
	I0916 17:51:24.309586  409178 status.go:257] multinode-602212-m03 status: &{Name:multinode-602212-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopNode (3.19s)
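Note: with one worker stopped, status still reports the surviving nodes as Running but exits non-zero (exit status 7 in both runs above), so a script can detect a partially stopped cluster from the exit code alone. A minimal sketch:

	out/minikube-linux-amd64 -p multinode-602212 node stop m03
	out/minikube-linux-amd64 -p multinode-602212 status
	echo $?    # prints 7 rather than 0, as observed above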
TestMultiNode/serial/StartAfterStop (41.5s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-602212 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-602212 node start m03 -v=7 --alsologtostderr: (40.902747979s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-602212 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (41.50s)

TestMultiNode/serial/RestartKeepsNodes (168.97s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-602212
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-602212
multinode_test.go:321: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-602212: (27.203974343s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-602212 --wait=true -v=8 --alsologtostderr
E0916 17:52:45.906939  382962 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-375661/.minikube/profiles/functional-841551/client.crt: no such file or directory" logger="UnhandledError"
E0916 17:53:35.964262  382962 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-375661/.minikube/profiles/addons-214113/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-602212 --wait=true -v=8 --alsologtostderr: (2m21.676108015s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-602212
--- PASS: TestMultiNode/serial/RestartKeepsNodes (168.97s)
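Note: the invariant under test is that a full stop/start cycle preserves the node list. The skeleton of the check, without the harness:

	out/minikube-linux-amd64 node list -p multinode-602212     # record the nodes
	out/minikube-linux-amd64 stop -p multinode-602212
	out/minikube-linux-amd64 start -p multinode-602212 --wait=true
	out/minikube-linux-amd64 node list -p multinode-602212     # expect the same list back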
TestMultiNode/serial/DeleteNode (2.11s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-602212 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-602212 node delete m03: (1.61842587s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-602212 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (2.11s)

TestMultiNode/serial/StopMultiNode (24.92s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-602212 stop
multinode_test.go:345: (dbg) Done: out/minikube-linux-amd64 -p multinode-602212 stop: (24.754404425s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-602212 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-602212 status: exit status 7 (82.360562ms)

-- stdout --
	multinode-602212
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-602212-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-602212 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-602212 status --alsologtostderr: exit status 7 (81.56611ms)

-- stdout --
	multinode-602212
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-602212-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0916 17:55:21.772712  410909 out.go:345] Setting OutFile to fd 1 ...
	I0916 17:55:21.772814  410909 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 17:55:21.772823  410909 out.go:358] Setting ErrFile to fd 2...
	I0916 17:55:21.772827  410909 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 17:55:21.773048  410909 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19649-375661/.minikube/bin
	I0916 17:55:21.773205  410909 out.go:352] Setting JSON to false
	I0916 17:55:21.773231  410909 mustload.go:65] Loading cluster: multinode-602212
	I0916 17:55:21.773283  410909 notify.go:220] Checking for updates...
	I0916 17:55:21.773776  410909 config.go:182] Loaded profile config "multinode-602212": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0916 17:55:21.773798  410909 status.go:255] checking status of multinode-602212 ...
	I0916 17:55:21.774338  410909 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0916 17:55:21.774382  410909 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 17:55:21.793907  410909 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45967
	I0916 17:55:21.794343  410909 main.go:141] libmachine: () Calling .GetVersion
	I0916 17:55:21.794976  410909 main.go:141] libmachine: Using API Version  1
	I0916 17:55:21.795014  410909 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 17:55:21.795355  410909 main.go:141] libmachine: () Calling .GetMachineName
	I0916 17:55:21.795530  410909 main.go:141] libmachine: (multinode-602212) Calling .GetState
	I0916 17:55:21.797006  410909 status.go:330] multinode-602212 host status = "Stopped" (err=<nil>)
	I0916 17:55:21.797020  410909 status.go:343] host is not running, skipping remaining checks
	I0916 17:55:21.797026  410909 status.go:257] multinode-602212 status: &{Name:multinode-602212 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0916 17:55:21.797072  410909 status.go:255] checking status of multinode-602212-m02 ...
	I0916 17:55:21.797351  410909 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0916 17:55:21.797391  410909 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 17:55:21.811066  410909 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41321
	I0916 17:55:21.811481  410909 main.go:141] libmachine: () Calling .GetVersion
	I0916 17:55:21.811924  410909 main.go:141] libmachine: Using API Version  1
	I0916 17:55:21.811947  410909 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 17:55:21.812230  410909 main.go:141] libmachine: () Calling .GetMachineName
	I0916 17:55:21.812388  410909 main.go:141] libmachine: (multinode-602212-m02) Calling .GetState
	I0916 17:55:21.813578  410909 status.go:330] multinode-602212-m02 host status = "Stopped" (err=<nil>)
	I0916 17:55:21.813591  410909 status.go:343] host is not running, skipping remaining checks
	I0916 17:55:21.813597  410909 status.go:257] multinode-602212-m02 status: &{Name:multinode-602212-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (24.92s)

TestMultiNode/serial/RestartMultiNode (111.85s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-602212 --wait=true -v=8 --alsologtostderr --driver=kvm2 
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-602212 --wait=true -v=8 --alsologtostderr --driver=kvm2 : (1m51.350673233s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-602212 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (111.85s)

TestMultiNode/serial/ValidateNameConflict (50.16s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-602212
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-602212-m02 --driver=kvm2 
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-602212-m02 --driver=kvm2 : exit status 14 (56.626312ms)

-- stdout --
	* [multinode-602212-m02] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19649
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19649-375661/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19649-375661/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	! Profile name 'multinode-602212-m02' is duplicated with machine name 'multinode-602212-m02' in profile 'multinode-602212'
	X Exiting due to MK_USAGE: Profile name should be unique

** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-602212-m03 --driver=kvm2 
E0916 17:57:45.907482  382962 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-375661/.minikube/profiles/functional-841551/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-602212-m03 --driver=kvm2 : (48.822065207s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-602212
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-602212: exit status 80 (204.323675ms)

-- stdout --
	* Adding node m03 to cluster multinode-602212 as [worker]
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-602212-m03 already exists in multinode-602212-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-602212-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-amd64 delete -p multinode-602212-m03: (1.035893153s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (50.16s)
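Note: two distinct failure modes are exercised above: reusing an existing machine name at start time is rejected up front with MK_USAGE (exit 14), while `node add` fails with GUEST_NODE_ADD (exit 80) because the next generated node name, multinode-602212-m03, collides with the standalone profile of the same name. Reduced to the two failing commands:

	out/minikube-linux-amd64 start -p multinode-602212-m02 --driver=kvm2     # exit 14: duplicates a machine name in profile multinode-602212
	out/minikube-linux-amd64 node add -p multinode-602212                    # exit 80: generated node name is already taken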
TestPreload (249.8s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-313431 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --kubernetes-version=v1.24.4
E0916 17:58:35.964393  382962 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-375661/.minikube/profiles/addons-214113/client.crt: no such file or directory" logger="UnhandledError"
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-313431 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --kubernetes-version=v1.24.4: (1m59.735268804s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-313431 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-amd64 -p test-preload-313431 image pull gcr.io/k8s-minikube/busybox: (2.042543117s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-313431
preload_test.go:58: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-313431: (12.470581151s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-313431 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2 
E0916 18:01:39.031477  382962 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-375661/.minikube/profiles/addons-214113/client.crt: no such file or directory" logger="UnhandledError"
preload_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-313431 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2 : (1m54.318457891s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-313431 image list
helpers_test.go:175: Cleaning up "test-preload-313431" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-313431
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-313431: (1.047757061s)
--- PASS: TestPreload (249.80s)

TestScheduledStopUnix (119.59s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-827472 --memory=2048 --driver=kvm2 
E0916 18:02:45.906889  382962 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-375661/.minikube/profiles/functional-841551/client.crt: no such file or directory" logger="UnhandledError"
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-827472 --memory=2048 --driver=kvm2 : (48.08869928s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-827472 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-827472 -n scheduled-stop-827472
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-827472 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-827472 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-827472 -n scheduled-stop-827472
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-827472
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-827472 --schedule 15s
E0916 18:03:35.964085  382962 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-375661/.minikube/profiles/addons-214113/client.crt: no such file or directory" logger="UnhandledError"
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-827472
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-827472: exit status 7 (65.658212ms)

-- stdout --
	scheduled-stop-827472
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-827472 -n scheduled-stop-827472
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-827472 -n scheduled-stop-827472: exit status 7 (65.457495ms)

-- stdout --
	Stopped

-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-827472" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-827472
--- PASS: TestScheduledStopUnix (119.59s)

TestSkaffold (125.05s)

=== RUN   TestSkaffold
skaffold_test.go:59: (dbg) Run:  /tmp/skaffold.exe2854148269 version
skaffold_test.go:63: skaffold version: v2.13.2
skaffold_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p skaffold-779710 --memory=2600 --driver=kvm2 
skaffold_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p skaffold-779710 --memory=2600 --driver=kvm2 : (44.412876762s)
skaffold_test.go:86: copying out/minikube-linux-amd64 to /home/jenkins/workspace/KVM_Linux_integration/out/minikube
skaffold_test.go:105: (dbg) Run:  /tmp/skaffold.exe2854148269 run --minikube-profile skaffold-779710 --kube-context skaffold-779710 --status-check=true --port-forward=false --interactive=false
E0916 18:05:48.973475  382962 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-375661/.minikube/profiles/functional-841551/client.crt: no such file or directory" logger="UnhandledError"
skaffold_test.go:105: (dbg) Done: /tmp/skaffold.exe2854148269 run --minikube-profile skaffold-779710 --kube-context skaffold-779710 --status-check=true --port-forward=false --interactive=false: (1m5.697163667s)
skaffold_test.go:111: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-app" in namespace "default" ...
helpers_test.go:344: "leeroy-app-5574866c66-6tt6v" [1cfc7da1-3715-4a50-b7b8-1e8590ea3675] Running
skaffold_test.go:111: (dbg) TestSkaffold: app=leeroy-app healthy within 6.003575625s
skaffold_test.go:114: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-web" in namespace "default" ...
helpers_test.go:344: "leeroy-web-674fc6ddbb-c8gjt" [885b0dbc-7587-4599-a917-6ccf4454c976] Running
skaffold_test.go:114: (dbg) TestSkaffold: app=leeroy-web healthy within 5.004051518s
helpers_test.go:175: Cleaning up "skaffold-779710" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p skaffold-779710
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p skaffold-779710: (1.195820296s)
--- PASS: TestSkaffold (125.05s)

TestRunningBinaryUpgrade (196.28s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.3504377229 start -p running-upgrade-450649 --memory=2200 --vm-driver=kvm2 
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.3504377229 start -p running-upgrade-450649 --memory=2200 --vm-driver=kvm2 : (2m6.695736131s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-450649 --memory=2200 --alsologtostderr -v=1 --driver=kvm2 
E0916 18:08:35.964238  382962 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-375661/.minikube/profiles/addons-214113/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-450649 --memory=2200 --alsologtostderr -v=1 --driver=kvm2 : (1m6.435314292s)
helpers_test.go:175: Cleaning up "running-upgrade-450649" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-450649
--- PASS: TestRunningBinaryUpgrade (196.28s)

TestKubernetesUpgrade (179.81s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-484776 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2 
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-484776 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2 : (1m17.228487251s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-484776
E0916 18:13:51.481573  382962 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-375661/.minikube/profiles/skaffold-779710/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-484776: (12.46952786s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-484776 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-484776 status --format={{.Host}}: exit status 7 (75.412582ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-484776 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=kvm2 
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-484776 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=kvm2 : (47.121005119s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-484776 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-484776 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2 
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-484776 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2 : exit status 106 (154.811401ms)

-- stdout --
	* [kubernetes-upgrade-484776] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19649
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19649-375661/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19649-375661/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.31.1 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-484776
	    minikube start -p kubernetes-upgrade-484776 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-4847762 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.31.1, by running:
	    
	    minikube start -p kubernetes-upgrade-484776 --kubernetes-version=v1.31.1
	    

** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-484776 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=kvm2 
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-484776 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=kvm2 : (41.676373486s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-484776" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-484776
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-484776: (1.010423575s)
--- PASS: TestKubernetesUpgrade (179.81s)

TestStoppedBinaryUpgrade/Setup (2.24s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (2.24s)

TestPause/serial/Start (89.05s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-422955 --memory=2048 --install-addons=false --wait=all --driver=kvm2 
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-422955 --memory=2048 --install-addons=false --wait=all --driver=kvm2 : (1m29.05190879s)
--- PASS: TestPause/serial/Start (89.05s)

TestStoppedBinaryUpgrade/Upgrade (168.37s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.922560186 start -p stopped-upgrade-440263 --memory=2200 --vm-driver=kvm2 
E0916 18:07:45.907040  382962 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-375661/.minikube/profiles/functional-841551/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.922560186 start -p stopped-upgrade-440263 --memory=2200 --vm-driver=kvm2 : (1m42.600270399s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.922560186 -p stopped-upgrade-440263 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.922560186 -p stopped-upgrade-440263 stop: (12.389739241s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-440263 --memory=2200 --alsologtostderr -v=1 --driver=kvm2 
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-440263 --memory=2200 --alsologtostderr -v=1 --driver=kvm2 : (53.38240931s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (168.37s)

TestPause/serial/SecondStartNoReconfiguration (54.33s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-422955 --alsologtostderr -v=1 --driver=kvm2 
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-422955 --alsologtostderr -v=1 --driver=kvm2 : (54.294559438s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (54.33s)

TestPause/serial/Pause (0.7s)

=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-422955 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.70s)

TestPause/serial/VerifyStatus (0.27s)

=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p pause-422955 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-422955 --output=json --layout=cluster: exit status 2 (271.677727ms)

-- stdout --
	{"Name":"pause-422955","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 12 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.34.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-422955","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.27s)

TestPause/serial/Unpause (0.6s)

=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-amd64 unpause -p pause-422955 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.60s)

TestPause/serial/PauseAgain (0.78s)

=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-422955 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.78s)

TestPause/serial/DeletePaused (0.81s)

=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p pause-422955 --alsologtostderr -v=5
--- PASS: TestPause/serial/DeletePaused (0.81s)

TestPause/serial/VerifyDeletedResources (1.48s)

=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
pause_test.go:142: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (1.482741952s)
--- PASS: TestPause/serial/VerifyDeletedResources (1.48s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.07s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-752233 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2 
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-752233 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2 : exit status 14 (65.374183ms)

-- stdout --
	* [NoKubernetes-752233] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19649
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19649-375661/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19649-375661/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.07s)

TestNoKubernetes/serial/StartWithK8s (54.18s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-752233 --driver=kvm2 
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-752233 --driver=kvm2 : (53.945893432s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-752233 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (54.18s)

TestStoppedBinaryUpgrade/MinikubeLogs (0.99s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-440263
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.99s)

TestNoKubernetes/serial/StartWithStopK8s (69.53s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-752233 --no-kubernetes --driver=kvm2 
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-752233 --no-kubernetes --driver=kvm2 : (1m8.308031878s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-752233 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-752233 status -o json: exit status 2 (222.689286ms)

-- stdout --
	{"Name":"NoKubernetes-752233","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-752233
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (69.53s)

TestNoKubernetes/serial/Start (31.48s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-752233 --no-kubernetes --driver=kvm2 
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-752233 --no-kubernetes --driver=kvm2 : (31.476526273s)
--- PASS: TestNoKubernetes/serial/Start (31.48s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.19s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-752233 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-752233 "sudo systemctl is-active --quiet service kubelet": exit status 1 (189.905393ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.19s)

TestNoKubernetes/serial/ProfileList (0.93s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (0.93s)

TestNoKubernetes/serial/Stop (2.49s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-752233
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-752233: (2.486810593s)
--- PASS: TestNoKubernetes/serial/Stop (2.49s)

TestNoKubernetes/serial/StartNoArgs (62.31s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-752233 --driver=kvm2 
E0916 18:11:28.116352  382962 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-375661/.minikube/profiles/skaffold-779710/client.crt: no such file or directory" logger="UnhandledError"
E0916 18:11:48.598290  382962 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-375661/.minikube/profiles/skaffold-779710/client.crt: no such file or directory" logger="UnhandledError"
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-752233 --driver=kvm2 : (1m2.305763068s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (62.31s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.19s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-752233 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-752233 "sudo systemctl is-active --quiet service kubelet": exit status 1 (190.080143ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.19s)

TestStartStop/group/old-k8s-version/serial/FirstStart (207.55s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-688653 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --kubernetes-version=v1.20.0
E0916 18:12:45.907434  382962 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-375661/.minikube/profiles/functional-841551/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-688653 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --kubernetes-version=v1.20.0: (3m27.55341253s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (207.55s)

TestStartStop/group/no-preload/serial/FirstStart (119.69s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-789638 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --kubernetes-version=v1.31.1
E0916 18:13:35.964082  382962 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-375661/.minikube/profiles/addons-214113/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-789638 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --kubernetes-version=v1.31.1: (1m59.686033489s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (119.69s)

TestStartStop/group/embed-certs/serial/FirstStart (93.84s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-626282 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --kubernetes-version=v1.31.1
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-626282 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --kubernetes-version=v1.31.1: (1m33.836892005s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (93.84s)

TestStartStop/group/no-preload/serial/DeployApp (9.4s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-789638 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [2d0fffc5-0adf-402f-9ade-82714f5c8016] Pending
helpers_test.go:344: "busybox" [2d0fffc5-0adf-402f-9ade-82714f5c8016] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [2d0fffc5-0adf-402f-9ade-82714f5c8016] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 9.00691282s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-789638 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (9.40s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (87.31s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-025614 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --kubernetes-version=v1.31.1
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-025614 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --kubernetes-version=v1.31.1: (1m27.307861435s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (87.31s)

                                                
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.16s)

start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-789638 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E0916 18:15:29.170856  382962 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-375661/.minikube/profiles/gvisor-657329/client.crt: no such file or directory" logger="UnhandledError"
E0916 18:15:29.177248  382962 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-375661/.minikube/profiles/gvisor-657329/client.crt: no such file or directory" logger="UnhandledError"
E0916 18:15:29.188625  382962 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-375661/.minikube/profiles/gvisor-657329/client.crt: no such file or directory" logger="UnhandledError"
E0916 18:15:29.210095  382962 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-375661/.minikube/profiles/gvisor-657329/client.crt: no such file or directory" logger="UnhandledError"
E0916 18:15:29.251650  382962 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-375661/.minikube/profiles/gvisor-657329/client.crt: no such file or directory" logger="UnhandledError"
E0916 18:15:29.333833  382962 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-375661/.minikube/profiles/gvisor-657329/client.crt: no such file or directory" logger="UnhandledError"
E0916 18:15:29.501163  382962 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-375661/.minikube/profiles/gvisor-657329/client.crt: no such file or directory" logger="UnhandledError"
E0916 18:15:29.823450  382962 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-375661/.minikube/profiles/gvisor-657329/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p no-preload-789638 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.063527549s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-789638 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.16s)

TestStartStop/group/no-preload/serial/Stop (13.38s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-789638 --alsologtostderr -v=3
E0916 18:15:30.465969  382962 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-375661/.minikube/profiles/gvisor-657329/client.crt: no such file or directory" logger="UnhandledError"
E0916 18:15:31.747946  382962 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-375661/.minikube/profiles/gvisor-657329/client.crt: no such file or directory" logger="UnhandledError"
E0916 18:15:34.310290  382962 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-375661/.minikube/profiles/gvisor-657329/client.crt: no such file or directory" logger="UnhandledError"
E0916 18:15:39.432088  382962 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-375661/.minikube/profiles/gvisor-657329/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-789638 --alsologtostderr -v=3: (13.380318314s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (13.38s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.2s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-789638 -n no-preload-789638
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-789638 -n no-preload-789638: exit status 7 (82.100984ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-789638 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.20s)

TestStartStop/group/no-preload/serial/SecondStart (315.87s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-789638 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --kubernetes-version=v1.31.1
E0916 18:15:49.674021  382962 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-375661/.minikube/profiles/gvisor-657329/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-789638 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --kubernetes-version=v1.31.1: (5m15.586598076s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-789638 -n no-preload-789638
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (315.87s)

TestStartStop/group/old-k8s-version/serial/DeployApp (11.61s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-688653 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [d0f5c3db-b4b9-4ba9-ae5d-8746d8a89d69] Pending
helpers_test.go:344: "busybox" [d0f5c3db-b4b9-4ba9-ae5d-8746d8a89d69] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [d0f5c3db-b4b9-4ba9-ae5d-8746d8a89d69] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 11.004744923s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-688653 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (11.61s)

TestStartStop/group/embed-certs/serial/DeployApp (9.32s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-626282 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [82815b10-316d-410f-8513-d913e1762e21] Pending
helpers_test.go:344: "busybox" [82815b10-316d-410f-8513-d913e1762e21] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E0916 18:16:07.621714  382962 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-375661/.minikube/profiles/skaffold-779710/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "busybox" [82815b10-316d-410f-8513-d913e1762e21] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 9.004437655s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-626282 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (9.32s)

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.82s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-688653 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-688653 describe deploy/metrics-server -n kube-system
E0916 18:16:10.155879  382962 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-375661/.minikube/profiles/gvisor-657329/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.82s)

TestStartStop/group/old-k8s-version/serial/Stop (12.72s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-688653 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-688653 --alsologtostderr -v=3: (12.721820441s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (12.72s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.92s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-626282 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-626282 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.92s)

TestStartStop/group/embed-certs/serial/Stop (13.31s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-626282 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-626282 --alsologtostderr -v=3: (13.305609571s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (13.31s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.22s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-688653 -n old-k8s-version-688653
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-688653 -n old-k8s-version-688653: exit status 7 (80.551487ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-688653 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.22s)

TestStartStop/group/old-k8s-version/serial/SecondStart (397.08s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-688653 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --kubernetes-version=v1.20.0
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-688653 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --kubernetes-version=v1.20.0: (6m36.789505517s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-688653 -n old-k8s-version-688653
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (397.08s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.19s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-626282 -n embed-certs-626282
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-626282 -n embed-certs-626282: exit status 7 (71.752077ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-626282 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.19s)

TestStartStop/group/embed-certs/serial/SecondStart (366.17s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-626282 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --kubernetes-version=v1.31.1
E0916 18:16:35.323865  382962 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-375661/.minikube/profiles/skaffold-779710/client.crt: no such file or directory" logger="UnhandledError"
E0916 18:16:51.117820  382962 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-375661/.minikube/profiles/gvisor-657329/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-626282 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --kubernetes-version=v1.31.1: (6m5.924196221s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-626282 -n embed-certs-626282
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (366.17s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.33s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-025614 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [0361ff10-030f-4d02-841f-232aa82e741b] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [0361ff10-030f-4d02-841f-232aa82e741b] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 8.005410237s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-025614 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.33s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.05s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-025614 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-025614 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.05s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (12.59s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-025614 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-025614 --alsologtostderr -v=3: (12.592845994s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (12.59s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.18s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-025614 -n default-k8s-diff-port-025614
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-025614 -n default-k8s-diff-port-025614: exit status 7 (64.479169ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-025614 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.18s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (295.3s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-025614 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --kubernetes-version=v1.31.1
E0916 18:17:45.907939  382962 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-375661/.minikube/profiles/functional-841551/client.crt: no such file or directory" logger="UnhandledError"
E0916 18:18:13.039229  382962 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-375661/.minikube/profiles/gvisor-657329/client.crt: no such file or directory" logger="UnhandledError"
E0916 18:18:19.033763  382962 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-375661/.minikube/profiles/addons-214113/client.crt: no such file or directory" logger="UnhandledError"
E0916 18:18:35.964685  382962 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-375661/.minikube/profiles/addons-214113/client.crt: no such file or directory" logger="UnhandledError"
E0916 18:20:29.170814  382962 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-375661/.minikube/profiles/gvisor-657329/client.crt: no such file or directory" logger="UnhandledError"
E0916 18:20:56.881407  382962 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-375661/.minikube/profiles/gvisor-657329/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-025614 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --kubernetes-version=v1.31.1: (4m55.02806936s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-025614 -n default-k8s-diff-port-025614
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (295.30s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (11.01s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-g8rrd" [dc070120-3e14-44b7-a4c1-27acb4526cfe] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:344: "kubernetes-dashboard-695b96c756-g8rrd" [dc070120-3e14-44b7-a4c1-27acb4526cfe] Running
E0916 18:21:07.621853  382962 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-375661/.minikube/profiles/skaffold-779710/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 11.004200476s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (11.01s)
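
The helper behind this subtest polls pods carrying the k8s-app=kubernetes-dashboard label until they report Running and healthy. A rough stand-alone equivalent using kubectl directly (the 120s timeout is illustrative; the test itself allows up to 9m0s):

    kubectl --context no-preload-789638 -n kubernetes-dashboard \
      wait --for=condition=Ready pod -l k8s-app=kubernetes-dashboard --timeout=120s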

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.08s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-g8rrd" [dc070120-3e14-44b7-a4c1-27acb4526cfe] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003815883s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-789638 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.08s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.2s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-789638 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/gvisor-addon:2
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.20s)
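
VerifyKubernetesImages dumps the images loaded in the node as JSON and flags anything outside the default Kubernetes set. A sketch of the same audit from a shell, assuming the JSON output is an array of objects with a repoTags field (true of recent minikube releases, but worth checking against your version; the jq/grep filtering is an addition, not part of the test):

    out/minikube-linux-amd64 -p no-preload-789638 image list --format=json \
      | jq -r '.[].repoTags[]?' \
      | grep -v '^registry.k8s.io/'    # whatever remains is a non-default image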

TestStartStop/group/no-preload/serial/Pause (2.3s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-789638 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-789638 -n no-preload-789638
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-789638 -n no-preload-789638: exit status 2 (230.482824ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-789638 -n no-preload-789638
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-789638 -n no-preload-789638: exit status 2 (233.394492ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p no-preload-789638 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-789638 -n no-preload-789638
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-789638 -n no-preload-789638
--- PASS: TestStartStop/group/no-preload/serial/Pause (2.30s)
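
The Pause subtest is a pause/unpause round-trip: while paused, status exits 2 and reports the apiserver as Paused and the kubelet as Stopped, which the test accepts; after unpause both queries succeed again. A condensed sketch built from the log's own commands (the "|| true" guards are illustrative, since exit status 2 is expected mid-sequence):

    out/minikube-linux-amd64 pause -p no-preload-789638 --alsologtostderr -v=1
    out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-789638 -n no-preload-789638 || true   # prints "Paused", exits 2
    out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-789638 -n no-preload-789638 || true     # prints "Stopped", exits 2
    out/minikube-linux-amd64 unpause -p no-preload-789638 --alsologtostderr -v=1
    out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-789638 -n no-preload-789638           # healthy again, exits 0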

TestStartStop/group/newest-cni/serial/FirstStart (58.1s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-302997 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --kubernetes-version=v1.31.1
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-302997 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --kubernetes-version=v1.31.1: (58.104455898s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (58.10s)
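
The newest-cni profile exercises a start in which no workloads can schedule yet: it selects the raw cni network plugin, feeds a pod-network CIDR through to kubeadm, and narrows --wait to components that come up without a functioning CNI. Restating the flags from the command above:

    # --wait is limited to apiserver/system_pods/default_sa because nothing
    # else becomes Ready until a CNI is actually applied.
    out/minikube-linux-amd64 start -p newest-cni-302997 \
      --memory=2200 --alsologtostderr \
      --wait=apiserver,system_pods,default_sa \
      --feature-gates ServerSideApply=true \
      --network-plugin=cni \
      --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 \
      --driver=kvm2 --kubernetes-version=v1.31.1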

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-9tdcw" [14b33bf0-1855-4e1f-817c-23f2f66064df] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003119902s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.75s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-302997 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.75s)

TestStartStop/group/newest-cni/serial/Stop (8.29s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-302997 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-302997 --alsologtostderr -v=3: (8.294636165s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (8.29s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.08s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-9tdcw" [14b33bf0-1855-4e1f-817c-23f2f66064df] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003894683s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-025614 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.08s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.21s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-025614 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/gvisor-addon:2
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.21s)

TestStartStop/group/default-k8s-diff-port/serial/Pause (2.54s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-025614 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-025614 -n default-k8s-diff-port-025614
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-025614 -n default-k8s-diff-port-025614: exit status 2 (267.63916ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-025614 -n default-k8s-diff-port-025614
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-025614 -n default-k8s-diff-port-025614: exit status 2 (239.925115ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-diff-port-025614 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-025614 -n default-k8s-diff-port-025614
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-025614 -n default-k8s-diff-port-025614
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (2.54s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.25s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-302997 -n newest-cni-302997
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-302997 -n newest-cni-302997: exit status 7 (104.450824ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-302997 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.25s)

TestStartStop/group/newest-cni/serial/SecondStart (40.46s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-302997 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --kubernetes-version=v1.31.1
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-302997 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --kubernetes-version=v1.31.1: (40.211297628s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-302997 -n newest-cni-302997
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (40.46s)

TestNetworkPlugins/group/auto/Start (90.05s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-188474 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2 
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-188474 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2 : (1m30.054440301s)
--- PASS: TestNetworkPlugins/group/auto/Start (90.05s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-m5clm" [89cbee2f-9313-4dd9-9086-75502efe6b64] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004163168s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.07s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-m5clm" [89cbee2f-9313-4dd9-9086-75502efe6b64] Running
E0916 18:22:45.907473  382962 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-375661/.minikube/profiles/functional-841551/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.00450872s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-626282 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.07s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.22s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-626282 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/gvisor-addon:2
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.22s)

TestStartStop/group/embed-certs/serial/Pause (2.31s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-626282 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-626282 -n embed-certs-626282
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-626282 -n embed-certs-626282: exit status 2 (230.434509ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-626282 -n embed-certs-626282
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-626282 -n embed-certs-626282: exit status 2 (239.510935ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p embed-certs-626282 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-626282 -n embed-certs-626282
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-626282 -n embed-certs-626282
--- PASS: TestStartStop/group/embed-certs/serial/Pause (2.31s)

TestNetworkPlugins/group/kindnet/Start (114.37s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-188474 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2 
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-188474 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2 : (1m54.36678197s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (114.37s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-m49kp" [417fb301-c068-49cd-94f1-25720244d2da] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004424147s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.07s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-m49kp" [417fb301-c068-49cd-94f1-25720244d2da] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003878384s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-688653 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.07s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.2s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-302997 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/gvisor-addon:2
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.20s)

TestStartStop/group/newest-cni/serial/Pause (2.09s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-302997 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-302997 -n newest-cni-302997
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-302997 -n newest-cni-302997: exit status 2 (224.933855ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-302997 -n newest-cni-302997
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-302997 -n newest-cni-302997: exit status 2 (227.349176ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-302997 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-302997 -n newest-cni-302997
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-302997 -n newest-cni-302997
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.09s)

TestNetworkPlugins/group/calico/Start (122.97s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-188474 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2 
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-188474 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2 : (2m2.968473576s)
--- PASS: TestNetworkPlugins/group/calico/Start (122.97s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.2s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-688653 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/gvisor-addon:2
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.20s)

TestStartStop/group/old-k8s-version/serial/Pause (2.13s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-688653 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-688653 -n old-k8s-version-688653
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-688653 -n old-k8s-version-688653: exit status 2 (225.976613ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-688653 -n old-k8s-version-688653
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-688653 -n old-k8s-version-688653: exit status 2 (221.788567ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p old-k8s-version-688653 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-688653 -n old-k8s-version-688653
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-688653 -n old-k8s-version-688653
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (2.13s)
E0916 18:26:56.142856  382962 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-375661/.minikube/profiles/default-k8s-diff-port-025614/client.crt: no such file or directory" logger="UnhandledError"
E0916 18:26:56.149216  382962 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-375661/.minikube/profiles/default-k8s-diff-port-025614/client.crt: no such file or directory" logger="UnhandledError"
E0916 18:26:56.160625  382962 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-375661/.minikube/profiles/default-k8s-diff-port-025614/client.crt: no such file or directory" logger="UnhandledError"
E0916 18:26:56.181946  382962 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-375661/.minikube/profiles/default-k8s-diff-port-025614/client.crt: no such file or directory" logger="UnhandledError"
E0916 18:26:56.223285  382962 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-375661/.minikube/profiles/default-k8s-diff-port-025614/client.crt: no such file or directory" logger="UnhandledError"
E0916 18:26:56.304645  382962 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-375661/.minikube/profiles/default-k8s-diff-port-025614/client.crt: no such file or directory" logger="UnhandledError"
E0916 18:26:56.466195  382962 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-375661/.minikube/profiles/default-k8s-diff-port-025614/client.crt: no such file or directory" logger="UnhandledError"
E0916 18:26:56.788026  382962 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-375661/.minikube/profiles/default-k8s-diff-port-025614/client.crt: no such file or directory" logger="UnhandledError"
E0916 18:26:57.429948  382962 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-375661/.minikube/profiles/default-k8s-diff-port-025614/client.crt: no such file or directory" logger="UnhandledError"
E0916 18:26:58.711495  382962 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-375661/.minikube/profiles/default-k8s-diff-port-025614/client.crt: no such file or directory" logger="UnhandledError"
E0916 18:27:01.273661  382962 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-375661/.minikube/profiles/default-k8s-diff-port-025614/client.crt: no such file or directory" logger="UnhandledError"

TestNetworkPlugins/group/custom-flannel/Start (130.05s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-188474 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2 
E0916 18:23:35.964906  382962 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-375661/.minikube/profiles/addons-214113/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-188474 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2 : (2m10.051556981s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (130.05s)
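
Unlike the built-in selections (--cni=kindnet, --cni=calico, and so on), this variant passes a manifest path, so minikube applies the test's bundled kube-flannel.yaml rather than one of its packaged CNIs:

    out/minikube-linux-amd64 start -p custom-flannel-188474 --memory=3072 \
      --alsologtostderr --wait=true --wait-timeout=15m \
      --cni=testdata/kube-flannel.yaml --driver=kvm2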

TestNetworkPlugins/group/auto/KubeletFlags (0.2s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-188474 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.20s)
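
The KubeletFlags check simply inspects the kubelet command line over ssh. The same probe by hand; the tr pipe is an addition (not part of the test) that splits the single long process line into one flag per line for readability:

    out/minikube-linux-amd64 ssh -p auto-188474 "pgrep -a kubelet" | tr ' ' '\n'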

TestNetworkPlugins/group/auto/NetCatPod (10.23s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-188474 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-959md" [68a2b3f5-2763-4ca8-a7d5-79ace9fca169] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-959md" [68a2b3f5-2763-4ca8-a7d5-79ace9fca169] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 10.004362832s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (10.23s)
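
NetCatPod installs the probe workload: replace --force recreates the netcat deployment even if an earlier run left one behind, after which the suite watches the app=netcat label until the pod is Running. A stand-alone equivalent (the kubectl wait line and its timeout are an illustrative stand-in for the helper's watch):

    kubectl --context auto-188474 replace --force -f testdata/netcat-deployment.yaml
    kubectl --context auto-188474 wait --for=condition=Ready pod -l app=netcat --timeout=120s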

TestNetworkPlugins/group/auto/DNS (0.19s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-188474 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.19s)

TestNetworkPlugins/group/auto/Localhost (0.16s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-188474 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.16s)

TestNetworkPlugins/group/auto/HairPin (0.14s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-188474 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.14s)
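
DNS, Localhost, and HairPin are the three connectivity probes run from inside the netcat pod: cluster DNS resolution, a loopback dial, and a hairpin dial back to the pod through its own service. Verbatim from the log:

    kubectl --context auto-188474 exec deployment/netcat -- nslookup kubernetes.default
    kubectl --context auto-188474 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
    kubectl --context auto-188474 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"

The hairpin probe is the discriminating one when comparing plugins: it only passes if the CNI routes a pod's traffic to its own service IP back to the pod itself.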

TestNetworkPlugins/group/false/Start (108.13s)

=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p false-188474 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=kvm2 
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p false-188474 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=kvm2 : (1m48.133348754s)
--- PASS: TestNetworkPlugins/group/false/Start (108.13s)

TestNetworkPlugins/group/kindnet/ControllerPod (6s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-wvjkv" [ff3e57ec-5550-45e6-acdf-430fc8ccb0ef] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.003230297s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.00s)
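
ControllerPod confirms the CNI's node agent is healthy before any traffic tests run. An alternative hand check via the DaemonSet rollout, assuming the DaemonSet is named kindnet like its pod label (an assumption worth verifying on your cluster):

    kubectl --context kindnet-188474 -n kube-system rollout status daemonset/kindnet --timeout=120s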

TestNetworkPlugins/group/kindnet/KubeletFlags (0.2s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-188474 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.20s)

TestNetworkPlugins/group/kindnet/NetCatPod (12.21s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-188474 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-d4b4s" [6f37d8e8-d864-4d19-9121-f7c353f258ca] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-d4b4s" [6f37d8e8-d864-4d19-9121-f7c353f258ca] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 12.004344704s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (12.21s)

TestNetworkPlugins/group/kindnet/DNS (0.2s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-188474 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.20s)

TestNetworkPlugins/group/kindnet/Localhost (0.18s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-188474 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.18s)

TestNetworkPlugins/group/kindnet/HairPin (0.13s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-188474 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.13s)

TestNetworkPlugins/group/calico/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-tdspn" [7ca74930-3909-4afa-bf52-f18a8248ad7c] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.005839716s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

TestNetworkPlugins/group/calico/KubeletFlags (0.24s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-188474 "pgrep -a kubelet"
E0916 18:25:19.973829  382962 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-375661/.minikube/profiles/no-preload-789638/client.crt: no such file or directory" logger="UnhandledError"
E0916 18:25:19.980291  382962 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-375661/.minikube/profiles/no-preload-789638/client.crt: no such file or directory" logger="UnhandledError"
E0916 18:25:19.991617  382962 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-375661/.minikube/profiles/no-preload-789638/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.24s)

TestNetworkPlugins/group/calico/NetCatPod (11.27s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-188474 replace --force -f testdata/netcat-deployment.yaml
E0916 18:25:20.013240  382962 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-375661/.minikube/profiles/no-preload-789638/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-hhrp7" [f35fa967-7a63-4f11-a104-9f0bc87b94d7] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0916 18:25:20.305900  382962 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-375661/.minikube/profiles/no-preload-789638/client.crt: no such file or directory" logger="UnhandledError"
E0916 18:25:20.628041  382962 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-375661/.minikube/profiles/no-preload-789638/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "netcat-6fc964789b-hhrp7" [f35fa967-7a63-4f11-a104-9f0bc87b94d7] Running
E0916 18:25:29.171126  382962 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-375661/.minikube/profiles/gvisor-657329/client.crt: no such file or directory" logger="UnhandledError"
E0916 18:25:30.235100  382962 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-375661/.minikube/profiles/no-preload-789638/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 11.004941638s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (11.27s)

TestNetworkPlugins/group/enable-default-cni/Start (63.58s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-188474 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2 
E0916 18:25:21.269553  382962 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-375661/.minikube/profiles/no-preload-789638/client.crt: no such file or directory" logger="UnhandledError"
E0916 18:25:22.551721  382962 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-375661/.minikube/profiles/no-preload-789638/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-188474 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2 : (1m3.583591476s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (63.58s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.24s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-188474 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.24s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (12.25s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-188474 replace --force -f testdata/netcat-deployment.yaml
E0916 18:25:25.113277  382962 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-375661/.minikube/profiles/no-preload-789638/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-4dppw" [c4781db0-36ed-48b4-956f-06dbc63ceae3] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-4dppw" [c4781db0-36ed-48b4-956f-06dbc63ceae3] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 12.003168151s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (12.25s)

TestNetworkPlugins/group/calico/DNS (0.19s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-188474 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.19s)

TestNetworkPlugins/group/calico/Localhost (0.15s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-188474 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.15s)

TestNetworkPlugins/group/calico/HairPin (0.14s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-188474 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.14s)

TestNetworkPlugins/group/custom-flannel/DNS (0.2s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-188474 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.20s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.15s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-188474 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.15s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.15s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-188474 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.15s)

TestNetworkPlugins/group/flannel/Start (73.13s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-188474 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2 
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-188474 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2 : (1m13.125363646s)
--- PASS: TestNetworkPlugins/group/flannel/Start (73.13s)

TestNetworkPlugins/group/bridge/Start (118.37s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-188474 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2 
E0916 18:25:58.181026  382962 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-375661/.minikube/profiles/old-k8s-version-688653/client.crt: no such file or directory" logger="UnhandledError"
E0916 18:25:58.187482  382962 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-375661/.minikube/profiles/old-k8s-version-688653/client.crt: no such file or directory" logger="UnhandledError"
E0916 18:25:58.198915  382962 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-375661/.minikube/profiles/old-k8s-version-688653/client.crt: no such file or directory" logger="UnhandledError"
E0916 18:25:58.220304  382962 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-375661/.minikube/profiles/old-k8s-version-688653/client.crt: no such file or directory" logger="UnhandledError"
E0916 18:25:58.261739  382962 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-375661/.minikube/profiles/old-k8s-version-688653/client.crt: no such file or directory" logger="UnhandledError"
E0916 18:25:58.343254  382962 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-375661/.minikube/profiles/old-k8s-version-688653/client.crt: no such file or directory" logger="UnhandledError"
E0916 18:25:58.504912  382962 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-375661/.minikube/profiles/old-k8s-version-688653/client.crt: no such file or directory" logger="UnhandledError"
E0916 18:25:58.826979  382962 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-375661/.minikube/profiles/old-k8s-version-688653/client.crt: no such file or directory" logger="UnhandledError"
E0916 18:25:59.469244  382962 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-375661/.minikube/profiles/old-k8s-version-688653/client.crt: no such file or directory" logger="UnhandledError"
E0916 18:26:00.750660  382962 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-375661/.minikube/profiles/old-k8s-version-688653/client.crt: no such file or directory" logger="UnhandledError"
E0916 18:26:00.958184  382962 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-375661/.minikube/profiles/no-preload-789638/client.crt: no such file or directory" logger="UnhandledError"
E0916 18:26:03.313433  382962 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-375661/.minikube/profiles/old-k8s-version-688653/client.crt: no such file or directory" logger="UnhandledError"
E0916 18:26:07.621991  382962 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-375661/.minikube/profiles/skaffold-779710/client.crt: no such file or directory" logger="UnhandledError"
E0916 18:26:08.435334  382962 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-375661/.minikube/profiles/old-k8s-version-688653/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-188474 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2 : (1m58.374726063s)
--- PASS: TestNetworkPlugins/group/bridge/Start (118.37s)
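
The cert_rotation messages interleaved above appear to be noise from earlier tests rather than a bridge CNI problem: client-go's certificate-rotation watcher still references client certs of profiles that other tests already deleted (old-k8s-version-688653, no-preload-789638, skaffold-779710), so each re-read fails with "no such file or directory". A minimal triage sketch, assuming shell access to the runner (commands hypothetical; the path is copied verbatim from the log):

        # verify the watched cert really is gone
        ls -l /home/jenkins/minikube-integration/19649-375661/.minikube/profiles/old-k8s-version-688653/client.crt
        # list the profiles that still exist on this runner
        out/minikube-linux-amd64 profile list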

TestNetworkPlugins/group/false/KubeletFlags (0.22s)

=== RUN   TestNetworkPlugins/group/false/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p false-188474 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/false/KubeletFlags (0.22s)

TestNetworkPlugins/group/false/NetCatPod (11.25s)

=== RUN   TestNetworkPlugins/group/false/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context false-188474 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/false/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-nqzkw" [f2c1f447-4e58-4795-99ad-175c7210c3db] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0916 18:26:18.676908  382962 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-375661/.minikube/profiles/old-k8s-version-688653/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "netcat-6fc964789b-nqzkw" [f2c1f447-4e58-4795-99ad-175c7210c3db] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/false/NetCatPod: app=netcat healthy within 11.00404155s
--- PASS: TestNetworkPlugins/group/false/NetCatPod (11.25s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.22s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-188474 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.22s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (12.26s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-188474 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-dqxrv" [9fde21cf-2e36-4124-a0ab-e5048a3c1034] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-dqxrv" [9fde21cf-2e36-4124-a0ab-e5048a3c1034] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 12.004488053s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (12.26s)

TestNetworkPlugins/group/false/DNS (0.18s)

=== RUN   TestNetworkPlugins/group/false/DNS
net_test.go:175: (dbg) Run:  kubectl --context false-188474 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/false/DNS (0.18s)

TestNetworkPlugins/group/false/Localhost (0.13s)

=== RUN   TestNetworkPlugins/group/false/Localhost
net_test.go:194: (dbg) Run:  kubectl --context false-188474 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/false/Localhost (0.13s)

TestNetworkPlugins/group/false/HairPin (0.13s)

=== RUN   TestNetworkPlugins/group/false/HairPin
net_test.go:264: (dbg) Run:  kubectl --context false-188474 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/false/HairPin (0.13s)
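
The DNS, Localhost, and HairPin steps above cover the three basic paths out of the netcat pod: cluster-DNS resolution of a service name, a plain localhost connection, and a hairpin connection in which the pod reaches itself back through its own "netcat" service. A sketch of the same checks run by hand against this profile, reusing the exact commands from the log:

        kubectl --context false-188474 exec deployment/netcat -- nslookup kubernetes.default
        kubectl --context false-188474 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
        kubectl --context false-188474 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"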

TestNetworkPlugins/group/enable-default-cni/DNS (0.18s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-188474 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.18s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.16s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-188474 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.16s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.15s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-188474 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.15s)

TestNetworkPlugins/group/kubenet/Start (67.7s)

=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kubenet-188474 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=kvm2 
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kubenet-188474 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=kvm2 : (1m7.70344821s)
--- PASS: TestNetworkPlugins/group/kubenet/Start (67.70s)

TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-fcjkv" [fb3759b3-6a03-4726-864e-e8905cebf181] Running
E0916 18:27:06.396002  382962 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-375661/.minikube/profiles/default-k8s-diff-port-025614/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.005329174s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)
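
The ControllerPod step only waits for a Running pod carrying the app=flannel label in the kube-flannel namespace. An equivalent manual check, as a sketch (assumes the profile is still up; the command is composed here from the labels in the log, not taken from it):

        kubectl --context flannel-188474 -n kube-flannel get pods -l app=flannel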

TestNetworkPlugins/group/flannel/KubeletFlags (0.2s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-188474 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.20s)

TestNetworkPlugins/group/flannel/NetCatPod (10.2s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-188474 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-rkt8p" [65566acd-792f-4882-b40f-0f057f896d97] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-rkt8p" [65566acd-792f-4882-b40f-0f057f896d97] Running
E0916 18:27:16.637928  382962 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-375661/.minikube/profiles/default-k8s-diff-port-025614/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 10.005700228s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (10.20s)

TestNetworkPlugins/group/flannel/DNS (0.21s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-188474 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.21s)

TestNetworkPlugins/group/flannel/Localhost (0.15s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-188474 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.15s)

TestNetworkPlugins/group/flannel/HairPin (0.14s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-188474 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.14s)

TestNetworkPlugins/group/kubenet/KubeletFlags (0.2s)

=== RUN   TestNetworkPlugins/group/kubenet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kubenet-188474 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kubenet/KubeletFlags (0.20s)

TestNetworkPlugins/group/kubenet/NetCatPod (13.22s)

=== RUN   TestNetworkPlugins/group/kubenet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kubenet-188474 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-9td5b" [c70b271f-9c16-4300-a3ef-b8411af73dd2] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-9td5b" [c70b271f-9c16-4300-a3ef-b8411af73dd2] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: app=netcat healthy within 13.00346392s
--- PASS: TestNetworkPlugins/group/kubenet/NetCatPod (13.22s)
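
Every NetCatPod step follows the same pattern: redeploy testdata/netcat-deployment.yaml, then wait for an app=netcat pod to become Ready. A repro sketch for this profile (the replace command is verbatim from the log; the watch command is added here for illustration):

        kubectl --context kubenet-188474 replace --force -f testdata/netcat-deployment.yaml
        kubectl --context kubenet-188474 get pods -l app=netcat -w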

TestNetworkPlugins/group/bridge/KubeletFlags (0.2s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-188474 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.20s)

TestNetworkPlugins/group/bridge/NetCatPod (12.25s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-188474 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-d648l" [70b385ed-350b-41f9-9e83-4eee1e56c4b3] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-d648l" [70b385ed-350b-41f9-9e83-4eee1e56c4b3] Running
E0916 18:28:03.841811  382962 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-375661/.minikube/profiles/no-preload-789638/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 12.003319824s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (12.25s)

TestNetworkPlugins/group/kubenet/DNS (0.15s)

=== RUN   TestNetworkPlugins/group/kubenet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kubenet-188474 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kubenet/DNS (0.15s)

TestNetworkPlugins/group/kubenet/Localhost (0.13s)

=== RUN   TestNetworkPlugins/group/kubenet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kubenet-188474 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kubenet/Localhost (0.13s)

TestNetworkPlugins/group/kubenet/HairPin (0.13s)

=== RUN   TestNetworkPlugins/group/kubenet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kubenet-188474 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kubenet/HairPin (0.13s)

TestNetworkPlugins/group/bridge/DNS (0.15s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-188474 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.15s)

TestNetworkPlugins/group/bridge/Localhost (0.12s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-188474 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.12s)

TestNetworkPlugins/group/bridge/HairPin (0.13s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-188474 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.13s)

Test skip (31/341)

TestDownloadOnly/v1.20.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

TestDownloadOnly/v1.20.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

TestDownloadOnly/v1.20.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

TestDownloadOnly/v1.31.1/cached-images (0s)

=== RUN   TestDownloadOnly/v1.31.1/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.31.1/cached-images (0.00s)

TestDownloadOnly/v1.31.1/binaries (0s)

=== RUN   TestDownloadOnly/v1.31.1/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.31.1/binaries (0.00s)

TestDownloadOnly/v1.31.1/kubectl (0s)

=== RUN   TestDownloadOnly/v1.31.1/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.31.1/kubectl (0.00s)

TestDownloadOnlyKic (0s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:500: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:550: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.00s)
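
All of the TunnelCmd/serial skips above share one cause: the suite cannot run 'route' without a password prompt on this runner, so the tunnel is never started. A sketch of what would otherwise be exercised, assuming passwordless sudo were configured (profile name hypothetical):

        # check passwordless sudo, then start the tunnel for a running profile
        sudo -n true && out/minikube-linux-amd64 tunnel -p <profile>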

TestImageBuild/serial/validateImageBuildWithBuildEnv (0s)

=== RUN   TestImageBuild/serial/validateImageBuildWithBuildEnv
image_test.go:114: skipping due to https://github.com/kubernetes/minikube/issues/12431
--- SKIP: TestImageBuild/serial/validateImageBuildWithBuildEnv (0.00s)

TestKicCustomNetwork (0s)

=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

TestKicExistingNetwork (0s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

TestKicCustomSubnet (0s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

TestKicStaticIP (0s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestInsufficientStorage (0s)

=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

TestMissingContainerUpgrade (0s)

=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

TestStartStop/group/disable-driver-mounts (0.16s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-200042" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-200042
--- SKIP: TestStartStop/group/disable-driver-mounts (0.16s)

TestNetworkPlugins/group/cilium (5.51s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:629: 
----------------------- debugLogs start: cilium-188474 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-188474

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-188474

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-188474

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-188474

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-188474

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-188474

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-188474

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-188474

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-188474

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-188474

>>> host: /etc/nsswitch.conf:
* Profile "cilium-188474" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-188474"

>>> host: /etc/hosts:
* Profile "cilium-188474" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-188474"

>>> host: /etc/resolv.conf:
* Profile "cilium-188474" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-188474"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-188474

>>> host: crictl pods:
* Profile "cilium-188474" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-188474"

>>> host: crictl containers:
* Profile "cilium-188474" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-188474"

>>> k8s: describe netcat deployment:
error: context "cilium-188474" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-188474" does not exist

>>> k8s: netcat logs:
error: context "cilium-188474" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-188474" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-188474" does not exist

>>> k8s: coredns logs:
error: context "cilium-188474" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-188474" does not exist

>>> k8s: api server logs:
error: context "cilium-188474" does not exist

>>> host: /etc/cni:
* Profile "cilium-188474" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-188474"

>>> host: ip a s:
* Profile "cilium-188474" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-188474"

>>> host: ip r s:
* Profile "cilium-188474" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-188474"

>>> host: iptables-save:
* Profile "cilium-188474" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-188474"

>>> host: iptables table nat:
* Profile "cilium-188474" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-188474"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-188474

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-188474

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-188474" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-188474" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-188474

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-188474

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-188474" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-188474" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-188474" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-188474" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-188474" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-188474" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-188474"

>>> host: kubelet daemon config:
* Profile "cilium-188474" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-188474"

>>> k8s: kubelet logs:
* Profile "cilium-188474" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-188474"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-188474" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-188474"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-188474" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-188474"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-188474

>>> host: docker daemon status:
* Profile "cilium-188474" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-188474"

>>> host: docker daemon config:
* Profile "cilium-188474" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-188474"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-188474" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-188474"

>>> host: docker system info:
* Profile "cilium-188474" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-188474"

>>> host: cri-docker daemon status:
* Profile "cilium-188474" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-188474"

>>> host: cri-docker daemon config:
* Profile "cilium-188474" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-188474"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-188474" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-188474"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-188474" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-188474"

>>> host: cri-dockerd version:
* Profile "cilium-188474" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-188474"

>>> host: containerd daemon status:
* Profile "cilium-188474" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-188474"

>>> host: containerd daemon config:
* Profile "cilium-188474" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-188474"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-188474" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-188474"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-188474" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-188474"

>>> host: containerd config dump:
* Profile "cilium-188474" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-188474"

>>> host: crio daemon status:
* Profile "cilium-188474" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-188474"

>>> host: crio daemon config:
* Profile "cilium-188474" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-188474"

>>> host: /etc/crio:
* Profile "cilium-188474" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-188474"

>>> host: crio config:
* Profile "cilium-188474" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-188474"

----------------------- debugLogs end: cilium-188474 [took: 5.304134506s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-188474" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-188474
--- SKIP: TestNetworkPlugins/group/cilium (5.51s)
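
Every probe in the debugLogs dump above fails with "context was not found" or "Profile ... not found" because the cilium group is skipped before any cluster is started, so the dump carries no diagnostics. A quick sketch to confirm nothing was left behind (the profile list command is the one the output itself suggests; the kubectl line is added here as an assumption about local setup):

        kubectl config get-contexts
        out/minikube-linux-amd64 profile list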
