Test Report: KVM_Linux 19355

6d23947514fd7a389789fed180382829b6444229:2024-08-02:35618

Tests failed (1/349)

| Order | Failed test                  | Duration (s) |
|-------|------------------------------|--------------|
| 42    | TestAddons/parallel/Registry | 73.86        |
TestAddons/parallel/Registry (73.86s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:332: registry stabilized in 3.613981ms
addons_test.go:334: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-698f998955-rmkd9" [4d303fd6-afaa-4c57-8d55-3ec1c66f6415] Running
addons_test.go:334: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.004833523s
addons_test.go:337: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-lnmnh" [1f0dd687-67d1-4e50-89e4-61b430552e7b] Running
addons_test.go:337: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.006720448s
addons_test.go:342: (dbg) Run:  kubectl --context addons-723198 delete po -l run=registry-test --now
addons_test.go:347: (dbg) Run:  kubectl --context addons-723198 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:347: (dbg) Non-zero exit: kubectl --context addons-723198 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": exit status 1 (1m0.074882576s)

-- stdout --
	pod "registry-test" deleted

-- /stdout --
** stderr ** 
	error: timed out waiting for the condition

** /stderr **
addons_test.go:349: failed to hit registry.kube-system.svc.cluster.local. args "kubectl --context addons-723198 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c \"wget --spider -S http://registry.kube-system.svc.cluster.local\"" failed: exit status 1
addons_test.go:353: expected curl response be "HTTP/1.1 200", but got *pod "registry-test" deleted
*
addons_test.go:361: (dbg) Run:  out/minikube-linux-amd64 -p addons-723198 ip
2024/08/02 17:33:30 [DEBUG] GET http://192.168.39.195:5000
addons_test.go:390: (dbg) Run:  out/minikube-linux-amd64 -p addons-723198 addons disable registry --alsologtostderr -v=1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-723198 -n addons-723198
helpers_test.go:244: <<< TestAddons/parallel/Registry FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Registry]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-723198 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-723198 logs -n 25: (1.127003564s)
helpers_test.go:252: TestAddons/parallel/Registry logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                 |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only              | download-only-751706 | jenkins | v1.33.1 | 02 Aug 24 17:26 UTC |                     |
	|         | -p download-only-751706              |                      |         |         |                     |                     |
	|         | --force --alsologtostderr            |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0         |                      |         |         |                     |                     |
	|         | --container-runtime=docker           |                      |         |         |                     |                     |
	|         | --driver=kvm2                        |                      |         |         |                     |                     |
	| delete  | --all                                | minikube             | jenkins | v1.33.1 | 02 Aug 24 17:26 UTC | 02 Aug 24 17:26 UTC |
	| delete  | -p download-only-751706              | download-only-751706 | jenkins | v1.33.1 | 02 Aug 24 17:26 UTC | 02 Aug 24 17:26 UTC |
	| start   | -o=json --download-only              | download-only-004188 | jenkins | v1.33.1 | 02 Aug 24 17:26 UTC |                     |
	|         | -p download-only-004188              |                      |         |         |                     |                     |
	|         | --force --alsologtostderr            |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3         |                      |         |         |                     |                     |
	|         | --container-runtime=docker           |                      |         |         |                     |                     |
	|         | --driver=kvm2                        |                      |         |         |                     |                     |
	| delete  | --all                                | minikube             | jenkins | v1.33.1 | 02 Aug 24 17:26 UTC | 02 Aug 24 17:26 UTC |
	| delete  | -p download-only-004188              | download-only-004188 | jenkins | v1.33.1 | 02 Aug 24 17:26 UTC | 02 Aug 24 17:26 UTC |
	| start   | -o=json --download-only              | download-only-184388 | jenkins | v1.33.1 | 02 Aug 24 17:26 UTC |                     |
	|         | -p download-only-184388              |                      |         |         |                     |                     |
	|         | --force --alsologtostderr            |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-rc.0    |                      |         |         |                     |                     |
	|         | --container-runtime=docker           |                      |         |         |                     |                     |
	|         | --driver=kvm2                        |                      |         |         |                     |                     |
	| delete  | --all                                | minikube             | jenkins | v1.33.1 | 02 Aug 24 17:26 UTC | 02 Aug 24 17:26 UTC |
	| delete  | -p download-only-184388              | download-only-184388 | jenkins | v1.33.1 | 02 Aug 24 17:26 UTC | 02 Aug 24 17:26 UTC |
	| delete  | -p download-only-751706              | download-only-751706 | jenkins | v1.33.1 | 02 Aug 24 17:26 UTC | 02 Aug 24 17:26 UTC |
	| delete  | -p download-only-004188              | download-only-004188 | jenkins | v1.33.1 | 02 Aug 24 17:26 UTC | 02 Aug 24 17:26 UTC |
	| delete  | -p download-only-184388              | download-only-184388 | jenkins | v1.33.1 | 02 Aug 24 17:26 UTC | 02 Aug 24 17:26 UTC |
	| start   | --download-only -p                   | binary-mirror-454723 | jenkins | v1.33.1 | 02 Aug 24 17:26 UTC |                     |
	|         | binary-mirror-454723                 |                      |         |         |                     |                     |
	|         | --alsologtostderr                    |                      |         |         |                     |                     |
	|         | --binary-mirror                      |                      |         |         |                     |                     |
	|         | http://127.0.0.1:40911               |                      |         |         |                     |                     |
	|         | --driver=kvm2                        |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-454723              | binary-mirror-454723 | jenkins | v1.33.1 | 02 Aug 24 17:26 UTC | 02 Aug 24 17:26 UTC |
	| addons  | disable dashboard -p                 | addons-723198        | jenkins | v1.33.1 | 02 Aug 24 17:26 UTC |                     |
	|         | addons-723198                        |                      |         |         |                     |                     |
	| addons  | enable dashboard -p                  | addons-723198        | jenkins | v1.33.1 | 02 Aug 24 17:26 UTC |                     |
	|         | addons-723198                        |                      |         |         |                     |                     |
	| start   | -p addons-723198 --wait=true         | addons-723198        | jenkins | v1.33.1 | 02 Aug 24 17:26 UTC | 02 Aug 24 17:31 UTC |
	|         | --memory=4000 --alsologtostderr      |                      |         |         |                     |                     |
	|         | --addons=registry                    |                      |         |         |                     |                     |
	|         | --addons=metrics-server              |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots             |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver         |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                    |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner               |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget            |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin        |                      |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano       |                      |         |         |                     |                     |
	|         | --driver=kvm2  --addons=ingress      |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                 |                      |         |         |                     |                     |
	|         | --addons=helm-tiller                 |                      |         |         |                     |                     |
	| addons  | addons-723198 addons disable         | addons-723198        | jenkins | v1.33.1 | 02 Aug 24 17:31 UTC | 02 Aug 24 17:32 UTC |
	|         | volcano --alsologtostderr -v=1       |                      |         |         |                     |                     |
	| addons  | addons-723198 addons disable         | addons-723198        | jenkins | v1.33.1 | 02 Aug 24 17:32 UTC | 02 Aug 24 17:32 UTC |
	|         | gcp-auth --alsologtostderr           |                      |         |         |                     |                     |
	|         | -v=1                                 |                      |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p          | addons-723198        | jenkins | v1.33.1 | 02 Aug 24 17:32 UTC | 02 Aug 24 17:32 UTC |
	|         | addons-723198                        |                      |         |         |                     |                     |
	| addons  | addons-723198 addons                 | addons-723198        | jenkins | v1.33.1 | 02 Aug 24 17:32 UTC | 02 Aug 24 17:32 UTC |
	|         | disable metrics-server               |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1               |                      |         |         |                     |                     |
	| ip      | addons-723198 ip                     | addons-723198        | jenkins | v1.33.1 | 02 Aug 24 17:33 UTC | 02 Aug 24 17:33 UTC |
	| addons  | addons-723198 addons disable         | addons-723198        | jenkins | v1.33.1 | 02 Aug 24 17:33 UTC | 02 Aug 24 17:33 UTC |
	|         | registry --alsologtostderr           |                      |         |         |                     |                     |
	|         | -v=1                                 |                      |         |         |                     |                     |
	|---------|--------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/02 17:26:42
	Running on machine: ubuntu-20-agent-7
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0802 17:26:42.278958   13490 out.go:291] Setting OutFile to fd 1 ...
	I0802 17:26:42.279065   13490 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0802 17:26:42.279075   13490 out.go:304] Setting ErrFile to fd 2...
	I0802 17:26:42.279079   13490 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0802 17:26:42.279253   13490 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19355-5398/.minikube/bin
	I0802 17:26:42.279833   13490 out.go:298] Setting JSON to false
	I0802 17:26:42.280700   13490 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":551,"bootTime":1722619051,"procs":170,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0802 17:26:42.280753   13490 start.go:139] virtualization: kvm guest
	I0802 17:26:42.282959   13490 out.go:177] * [addons-723198] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0802 17:26:42.284223   13490 notify.go:220] Checking for updates...
	I0802 17:26:42.284240   13490 out.go:177]   - MINIKUBE_LOCATION=19355
	I0802 17:26:42.285720   13490 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0802 17:26:42.287097   13490 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19355-5398/kubeconfig
	I0802 17:26:42.288555   13490 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19355-5398/.minikube
	I0802 17:26:42.289896   13490 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0802 17:26:42.291361   13490 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0802 17:26:42.292766   13490 driver.go:392] Setting default libvirt URI to qemu:///system
	I0802 17:26:42.324280   13490 out.go:177] * Using the kvm2 driver based on user configuration
	I0802 17:26:42.325459   13490 start.go:297] selected driver: kvm2
	I0802 17:26:42.325475   13490 start.go:901] validating driver "kvm2" against <nil>
	I0802 17:26:42.325488   13490 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0802 17:26:42.326192   13490 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0802 17:26:42.326270   13490 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19355-5398/.minikube/bin:/home/jenkins/workspace/KVM_Linux_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0802 17:26:42.341234   13490 install.go:137] /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0802 17:26:42.341299   13490 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0802 17:26:42.341509   13490 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0802 17:26:42.341574   13490 cni.go:84] Creating CNI manager for ""
	I0802 17:26:42.341601   13490 cni.go:158] "kvm2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0802 17:26:42.341614   13490 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0802 17:26:42.341686   13490 start.go:340] cluster config:
	{Name:addons-723198 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:addons-723198 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0802 17:26:42.341791   13490 iso.go:125] acquiring lock: {Name:mk60a609c45f45520dec0098fa54c9404c4e9236 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0802 17:26:42.343626   13490 out.go:177] * Starting "addons-723198" primary control-plane node in "addons-723198" cluster
	I0802 17:26:42.344695   13490 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0802 17:26:42.344744   13490 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19355-5398/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4
	I0802 17:26:42.344754   13490 cache.go:56] Caching tarball of preloaded images
	I0802 17:26:42.344850   13490 preload.go:172] Found /home/jenkins/minikube-integration/19355-5398/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0802 17:26:42.344860   13490 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0802 17:26:42.345126   13490 profile.go:143] Saving config to /home/jenkins/minikube-integration/19355-5398/.minikube/profiles/addons-723198/config.json ...
	I0802 17:26:42.345155   13490 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19355-5398/.minikube/profiles/addons-723198/config.json: {Name:mk2098416db1572417b6b7c5169045b6c6b7f27b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0802 17:26:42.345272   13490 start.go:360] acquireMachinesLock for addons-723198: {Name:mke9b31bfbb36f01adc9168f4ab862314232125a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0802 17:26:42.345321   13490 start.go:364] duration metric: took 37.542µs to acquireMachinesLock for "addons-723198"
	I0802 17:26:42.345337   13490 start.go:93] Provisioning new machine with config: &{Name:addons-723198 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:addons-723198 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0802 17:26:42.345390   13490 start.go:125] createHost starting for "" (driver="kvm2")
	I0802 17:26:42.346748   13490 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0802 17:26:42.346916   13490 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0802 17:26:42.346962   13490 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0802 17:26:42.361016   13490 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38625
	I0802 17:26:42.361424   13490 main.go:141] libmachine: () Calling .GetVersion
	I0802 17:26:42.361984   13490 main.go:141] libmachine: Using API Version  1
	I0802 17:26:42.362008   13490 main.go:141] libmachine: () Calling .SetConfigRaw
	I0802 17:26:42.362324   13490 main.go:141] libmachine: () Calling .GetMachineName
	I0802 17:26:42.362506   13490 main.go:141] libmachine: (addons-723198) Calling .GetMachineName
	I0802 17:26:42.362653   13490 main.go:141] libmachine: (addons-723198) Calling .DriverName
	I0802 17:26:42.362780   13490 start.go:159] libmachine.API.Create for "addons-723198" (driver="kvm2")
	I0802 17:26:42.362823   13490 client.go:168] LocalClient.Create starting
	I0802 17:26:42.362864   13490 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19355-5398/.minikube/certs/ca.pem
	I0802 17:26:42.422725   13490 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19355-5398/.minikube/certs/cert.pem
	I0802 17:26:42.530255   13490 main.go:141] libmachine: Running pre-create checks...
	I0802 17:26:42.530276   13490 main.go:141] libmachine: (addons-723198) Calling .PreCreateCheck
	I0802 17:26:42.530741   13490 main.go:141] libmachine: (addons-723198) Calling .GetConfigRaw
	I0802 17:26:42.531216   13490 main.go:141] libmachine: Creating machine...
	I0802 17:26:42.531231   13490 main.go:141] libmachine: (addons-723198) Calling .Create
	I0802 17:26:42.531361   13490 main.go:141] libmachine: (addons-723198) Creating KVM machine...
	I0802 17:26:42.532730   13490 main.go:141] libmachine: (addons-723198) DBG | found existing default KVM network
	I0802 17:26:42.533492   13490 main.go:141] libmachine: (addons-723198) DBG | I0802 17:26:42.533356   13512 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00012f990}
	I0802 17:26:42.533519   13490 main.go:141] libmachine: (addons-723198) DBG | created network xml: 
	I0802 17:26:42.533534   13490 main.go:141] libmachine: (addons-723198) DBG | <network>
	I0802 17:26:42.533542   13490 main.go:141] libmachine: (addons-723198) DBG |   <name>mk-addons-723198</name>
	I0802 17:26:42.533550   13490 main.go:141] libmachine: (addons-723198) DBG |   <dns enable='no'/>
	I0802 17:26:42.533556   13490 main.go:141] libmachine: (addons-723198) DBG |   
	I0802 17:26:42.533566   13490 main.go:141] libmachine: (addons-723198) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0802 17:26:42.533573   13490 main.go:141] libmachine: (addons-723198) DBG |     <dhcp>
	I0802 17:26:42.533582   13490 main.go:141] libmachine: (addons-723198) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0802 17:26:42.533587   13490 main.go:141] libmachine: (addons-723198) DBG |     </dhcp>
	I0802 17:26:42.533592   13490 main.go:141] libmachine: (addons-723198) DBG |   </ip>
	I0802 17:26:42.533600   13490 main.go:141] libmachine: (addons-723198) DBG |   
	I0802 17:26:42.533605   13490 main.go:141] libmachine: (addons-723198) DBG | </network>
	I0802 17:26:42.533609   13490 main.go:141] libmachine: (addons-723198) DBG | 
	I0802 17:26:42.538629   13490 main.go:141] libmachine: (addons-723198) DBG | trying to create private KVM network mk-addons-723198 192.168.39.0/24...
	I0802 17:26:42.602190   13490 main.go:141] libmachine: (addons-723198) DBG | private KVM network mk-addons-723198 192.168.39.0/24 created
	I0802 17:26:42.602220   13490 main.go:141] libmachine: (addons-723198) Setting up store path in /home/jenkins/minikube-integration/19355-5398/.minikube/machines/addons-723198 ...
	I0802 17:26:42.602242   13490 main.go:141] libmachine: (addons-723198) DBG | I0802 17:26:42.602157   13512 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19355-5398/.minikube
	I0802 17:26:42.602264   13490 main.go:141] libmachine: (addons-723198) Building disk image from file:///home/jenkins/minikube-integration/19355-5398/.minikube/cache/iso/amd64/minikube-v1.33.1-1722420371-19355-amd64.iso
	I0802 17:26:42.602285   13490 main.go:141] libmachine: (addons-723198) Downloading /home/jenkins/minikube-integration/19355-5398/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19355-5398/.minikube/cache/iso/amd64/minikube-v1.33.1-1722420371-19355-amd64.iso...
	I0802 17:26:42.852803   13490 main.go:141] libmachine: (addons-723198) DBG | I0802 17:26:42.852647   13512 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19355-5398/.minikube/machines/addons-723198/id_rsa...
	I0802 17:26:43.048098   13490 main.go:141] libmachine: (addons-723198) DBG | I0802 17:26:43.047918   13512 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19355-5398/.minikube/machines/addons-723198/addons-723198.rawdisk...
	I0802 17:26:43.048142   13490 main.go:141] libmachine: (addons-723198) DBG | Writing magic tar header
	I0802 17:26:43.048160   13490 main.go:141] libmachine: (addons-723198) DBG | Writing SSH key tar header
	I0802 17:26:43.048173   13490 main.go:141] libmachine: (addons-723198) DBG | I0802 17:26:43.048073   13512 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19355-5398/.minikube/machines/addons-723198 ...
	I0802 17:26:43.048205   13490 main.go:141] libmachine: (addons-723198) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19355-5398/.minikube/machines/addons-723198
	I0802 17:26:43.048260   13490 main.go:141] libmachine: (addons-723198) Setting executable bit set on /home/jenkins/minikube-integration/19355-5398/.minikube/machines/addons-723198 (perms=drwx------)
	I0802 17:26:43.048289   13490 main.go:141] libmachine: (addons-723198) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19355-5398/.minikube/machines
	I0802 17:26:43.048303   13490 main.go:141] libmachine: (addons-723198) Setting executable bit set on /home/jenkins/minikube-integration/19355-5398/.minikube/machines (perms=drwxr-xr-x)
	I0802 17:26:43.048314   13490 main.go:141] libmachine: (addons-723198) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19355-5398/.minikube
	I0802 17:26:43.048335   13490 main.go:141] libmachine: (addons-723198) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19355-5398
	I0802 17:26:43.048344   13490 main.go:141] libmachine: (addons-723198) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0802 17:26:43.048354   13490 main.go:141] libmachine: (addons-723198) Setting executable bit set on /home/jenkins/minikube-integration/19355-5398/.minikube (perms=drwxr-xr-x)
	I0802 17:26:43.048363   13490 main.go:141] libmachine: (addons-723198) Setting executable bit set on /home/jenkins/minikube-integration/19355-5398 (perms=drwxrwxr-x)
	I0802 17:26:43.048370   13490 main.go:141] libmachine: (addons-723198) DBG | Checking permissions on dir: /home/jenkins
	I0802 17:26:43.048384   13490 main.go:141] libmachine: (addons-723198) DBG | Checking permissions on dir: /home
	I0802 17:26:43.048402   13490 main.go:141] libmachine: (addons-723198) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0802 17:26:43.048414   13490 main.go:141] libmachine: (addons-723198) DBG | Skipping /home - not owner
	I0802 17:26:43.048428   13490 main.go:141] libmachine: (addons-723198) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0802 17:26:43.048438   13490 main.go:141] libmachine: (addons-723198) Creating domain...
	I0802 17:26:43.049301   13490 main.go:141] libmachine: (addons-723198) define libvirt domain using xml: 
	I0802 17:26:43.049352   13490 main.go:141] libmachine: (addons-723198) <domain type='kvm'>
	I0802 17:26:43.049364   13490 main.go:141] libmachine: (addons-723198)   <name>addons-723198</name>
	I0802 17:26:43.049372   13490 main.go:141] libmachine: (addons-723198)   <memory unit='MiB'>4000</memory>
	I0802 17:26:43.049378   13490 main.go:141] libmachine: (addons-723198)   <vcpu>2</vcpu>
	I0802 17:26:43.049385   13490 main.go:141] libmachine: (addons-723198)   <features>
	I0802 17:26:43.049391   13490 main.go:141] libmachine: (addons-723198)     <acpi/>
	I0802 17:26:43.049397   13490 main.go:141] libmachine: (addons-723198)     <apic/>
	I0802 17:26:43.049402   13490 main.go:141] libmachine: (addons-723198)     <pae/>
	I0802 17:26:43.049409   13490 main.go:141] libmachine: (addons-723198)     
	I0802 17:26:43.049414   13490 main.go:141] libmachine: (addons-723198)   </features>
	I0802 17:26:43.049421   13490 main.go:141] libmachine: (addons-723198)   <cpu mode='host-passthrough'>
	I0802 17:26:43.049427   13490 main.go:141] libmachine: (addons-723198)   
	I0802 17:26:43.049434   13490 main.go:141] libmachine: (addons-723198)   </cpu>
	I0802 17:26:43.049439   13490 main.go:141] libmachine: (addons-723198)   <os>
	I0802 17:26:43.049446   13490 main.go:141] libmachine: (addons-723198)     <type>hvm</type>
	I0802 17:26:43.049470   13490 main.go:141] libmachine: (addons-723198)     <boot dev='cdrom'/>
	I0802 17:26:43.049490   13490 main.go:141] libmachine: (addons-723198)     <boot dev='hd'/>
	I0802 17:26:43.049502   13490 main.go:141] libmachine: (addons-723198)     <bootmenu enable='no'/>
	I0802 17:26:43.049517   13490 main.go:141] libmachine: (addons-723198)   </os>
	I0802 17:26:43.049524   13490 main.go:141] libmachine: (addons-723198)   <devices>
	I0802 17:26:43.049530   13490 main.go:141] libmachine: (addons-723198)     <disk type='file' device='cdrom'>
	I0802 17:26:43.049546   13490 main.go:141] libmachine: (addons-723198)       <source file='/home/jenkins/minikube-integration/19355-5398/.minikube/machines/addons-723198/boot2docker.iso'/>
	I0802 17:26:43.049554   13490 main.go:141] libmachine: (addons-723198)       <target dev='hdc' bus='scsi'/>
	I0802 17:26:43.049561   13490 main.go:141] libmachine: (addons-723198)       <readonly/>
	I0802 17:26:43.049567   13490 main.go:141] libmachine: (addons-723198)     </disk>
	I0802 17:26:43.049575   13490 main.go:141] libmachine: (addons-723198)     <disk type='file' device='disk'>
	I0802 17:26:43.049584   13490 main.go:141] libmachine: (addons-723198)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0802 17:26:43.049616   13490 main.go:141] libmachine: (addons-723198)       <source file='/home/jenkins/minikube-integration/19355-5398/.minikube/machines/addons-723198/addons-723198.rawdisk'/>
	I0802 17:26:43.049639   13490 main.go:141] libmachine: (addons-723198)       <target dev='hda' bus='virtio'/>
	I0802 17:26:43.049652   13490 main.go:141] libmachine: (addons-723198)     </disk>
	I0802 17:26:43.049662   13490 main.go:141] libmachine: (addons-723198)     <interface type='network'>
	I0802 17:26:43.049676   13490 main.go:141] libmachine: (addons-723198)       <source network='mk-addons-723198'/>
	I0802 17:26:43.049687   13490 main.go:141] libmachine: (addons-723198)       <model type='virtio'/>
	I0802 17:26:43.049698   13490 main.go:141] libmachine: (addons-723198)     </interface>
	I0802 17:26:43.049710   13490 main.go:141] libmachine: (addons-723198)     <interface type='network'>
	I0802 17:26:43.049734   13490 main.go:141] libmachine: (addons-723198)       <source network='default'/>
	I0802 17:26:43.049751   13490 main.go:141] libmachine: (addons-723198)       <model type='virtio'/>
	I0802 17:26:43.049757   13490 main.go:141] libmachine: (addons-723198)     </interface>
	I0802 17:26:43.049763   13490 main.go:141] libmachine: (addons-723198)     <serial type='pty'>
	I0802 17:26:43.049780   13490 main.go:141] libmachine: (addons-723198)       <target port='0'/>
	I0802 17:26:43.049791   13490 main.go:141] libmachine: (addons-723198)     </serial>
	I0802 17:26:43.049801   13490 main.go:141] libmachine: (addons-723198)     <console type='pty'>
	I0802 17:26:43.049827   13490 main.go:141] libmachine: (addons-723198)       <target type='serial' port='0'/>
	I0802 17:26:43.049849   13490 main.go:141] libmachine: (addons-723198)     </console>
	I0802 17:26:43.049860   13490 main.go:141] libmachine: (addons-723198)     <rng model='virtio'>
	I0802 17:26:43.049874   13490 main.go:141] libmachine: (addons-723198)       <backend model='random'>/dev/random</backend>
	I0802 17:26:43.049888   13490 main.go:141] libmachine: (addons-723198)     </rng>
	I0802 17:26:43.049901   13490 main.go:141] libmachine: (addons-723198)     
	I0802 17:26:43.049912   13490 main.go:141] libmachine: (addons-723198)     
	I0802 17:26:43.049918   13490 main.go:141] libmachine: (addons-723198)   </devices>
	I0802 17:26:43.049928   13490 main.go:141] libmachine: (addons-723198) </domain>
	I0802 17:26:43.049936   13490 main.go:141] libmachine: (addons-723198) 
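Stripped of the log prefixes, the libvirt domain definition logged above reads as the following XML (values exactly as logged; blank log lines correspond to omitted optional elements):

```xml
<domain type='kvm'>
  <name>addons-723198</name>
  <memory unit='MiB'>4000</memory>
  <vcpu>2</vcpu>
  <features>
    <acpi/>
    <apic/>
    <pae/>
  </features>
  <cpu mode='host-passthrough'>
  </cpu>
  <os>
    <type>hvm</type>
    <boot dev='cdrom'/>
    <boot dev='hd'/>
    <bootmenu enable='no'/>
  </os>
  <devices>
    <disk type='file' device='cdrom'>
      <source file='/home/jenkins/minikube-integration/19355-5398/.minikube/machines/addons-723198/boot2docker.iso'/>
      <target dev='hdc' bus='scsi'/>
      <readonly/>
    </disk>
    <disk type='file' device='disk'>
      <driver name='qemu' type='raw' cache='default' io='threads' />
      <source file='/home/jenkins/minikube-integration/19355-5398/.minikube/machines/addons-723198/addons-723198.rawdisk'/>
      <target dev='hda' bus='virtio'/>
    </disk>
    <interface type='network'>
      <source network='mk-addons-723198'/>
      <model type='virtio'/>
    </interface>
    <interface type='network'>
      <source network='default'/>
      <model type='virtio'/>
    </interface>
    <serial type='pty'>
      <target port='0'/>
    </serial>
    <console type='pty'>
      <target type='serial' port='0'/>
    </console>
    <rng model='virtio'>
      <backend model='random'>/dev/random</backend>
    </rng>
  </devices>
</domain>
```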
	I0802 17:26:43.056540   13490 main.go:141] libmachine: (addons-723198) DBG | domain addons-723198 has defined MAC address 52:54:00:e6:df:07 in network default
	I0802 17:26:43.057115   13490 main.go:141] libmachine: (addons-723198) Ensuring networks are active...
	I0802 17:26:43.057136   13490 main.go:141] libmachine: (addons-723198) DBG | domain addons-723198 has defined MAC address 52:54:00:6f:78:5c in network mk-addons-723198
	I0802 17:26:43.057830   13490 main.go:141] libmachine: (addons-723198) Ensuring network default is active
	I0802 17:26:43.058094   13490 main.go:141] libmachine: (addons-723198) Ensuring network mk-addons-723198 is active
	I0802 17:26:43.058551   13490 main.go:141] libmachine: (addons-723198) Getting domain xml...
	I0802 17:26:43.059167   13490 main.go:141] libmachine: (addons-723198) Creating domain...
	I0802 17:26:44.482535   13490 main.go:141] libmachine: (addons-723198) Waiting to get IP...
	I0802 17:26:44.483257   13490 main.go:141] libmachine: (addons-723198) DBG | domain addons-723198 has defined MAC address 52:54:00:6f:78:5c in network mk-addons-723198
	I0802 17:26:44.483712   13490 main.go:141] libmachine: (addons-723198) DBG | unable to find current IP address of domain addons-723198 in network mk-addons-723198
	I0802 17:26:44.483740   13490 main.go:141] libmachine: (addons-723198) DBG | I0802 17:26:44.483707   13512 retry.go:31] will retry after 302.018385ms: waiting for machine to come up
	I0802 17:26:44.787246   13490 main.go:141] libmachine: (addons-723198) DBG | domain addons-723198 has defined MAC address 52:54:00:6f:78:5c in network mk-addons-723198
	I0802 17:26:44.787661   13490 main.go:141] libmachine: (addons-723198) DBG | unable to find current IP address of domain addons-723198 in network mk-addons-723198
	I0802 17:26:44.787699   13490 main.go:141] libmachine: (addons-723198) DBG | I0802 17:26:44.787633   13512 retry.go:31] will retry after 383.819595ms: waiting for machine to come up
	I0802 17:26:45.172966   13490 main.go:141] libmachine: (addons-723198) DBG | domain addons-723198 has defined MAC address 52:54:00:6f:78:5c in network mk-addons-723198
	I0802 17:26:45.173386   13490 main.go:141] libmachine: (addons-723198) DBG | unable to find current IP address of domain addons-723198 in network mk-addons-723198
	I0802 17:26:45.173416   13490 main.go:141] libmachine: (addons-723198) DBG | I0802 17:26:45.173346   13512 retry.go:31] will retry after 432.612274ms: waiting for machine to come up
	I0802 17:26:45.607935   13490 main.go:141] libmachine: (addons-723198) DBG | domain addons-723198 has defined MAC address 52:54:00:6f:78:5c in network mk-addons-723198
	I0802 17:26:45.608335   13490 main.go:141] libmachine: (addons-723198) DBG | unable to find current IP address of domain addons-723198 in network mk-addons-723198
	I0802 17:26:45.608371   13490 main.go:141] libmachine: (addons-723198) DBG | I0802 17:26:45.608288   13512 retry.go:31] will retry after 491.937093ms: waiting for machine to come up
	I0802 17:26:46.101976   13490 main.go:141] libmachine: (addons-723198) DBG | domain addons-723198 has defined MAC address 52:54:00:6f:78:5c in network mk-addons-723198
	I0802 17:26:46.102339   13490 main.go:141] libmachine: (addons-723198) DBG | unable to find current IP address of domain addons-723198 in network mk-addons-723198
	I0802 17:26:46.102381   13490 main.go:141] libmachine: (addons-723198) DBG | I0802 17:26:46.102276   13512 retry.go:31] will retry after 578.417609ms: waiting for machine to come up
	I0802 17:26:46.681900   13490 main.go:141] libmachine: (addons-723198) DBG | domain addons-723198 has defined MAC address 52:54:00:6f:78:5c in network mk-addons-723198
	I0802 17:26:46.682205   13490 main.go:141] libmachine: (addons-723198) DBG | unable to find current IP address of domain addons-723198 in network mk-addons-723198
	I0802 17:26:46.682231   13490 main.go:141] libmachine: (addons-723198) DBG | I0802 17:26:46.682174   13512 retry.go:31] will retry after 656.349731ms: waiting for machine to come up
	I0802 17:26:47.339913   13490 main.go:141] libmachine: (addons-723198) DBG | domain addons-723198 has defined MAC address 52:54:00:6f:78:5c in network mk-addons-723198
	I0802 17:26:47.340328   13490 main.go:141] libmachine: (addons-723198) DBG | unable to find current IP address of domain addons-723198 in network mk-addons-723198
	I0802 17:26:47.340353   13490 main.go:141] libmachine: (addons-723198) DBG | I0802 17:26:47.340285   13512 retry.go:31] will retry after 974.448097ms: waiting for machine to come up
	I0802 17:26:48.316037   13490 main.go:141] libmachine: (addons-723198) DBG | domain addons-723198 has defined MAC address 52:54:00:6f:78:5c in network mk-addons-723198
	I0802 17:26:48.316509   13490 main.go:141] libmachine: (addons-723198) DBG | unable to find current IP address of domain addons-723198 in network mk-addons-723198
	I0802 17:26:48.316533   13490 main.go:141] libmachine: (addons-723198) DBG | I0802 17:26:48.316464   13512 retry.go:31] will retry after 1.160859728s: waiting for machine to come up
	I0802 17:26:49.478839   13490 main.go:141] libmachine: (addons-723198) DBG | domain addons-723198 has defined MAC address 52:54:00:6f:78:5c in network mk-addons-723198
	I0802 17:26:49.479313   13490 main.go:141] libmachine: (addons-723198) DBG | unable to find current IP address of domain addons-723198 in network mk-addons-723198
	I0802 17:26:49.479342   13490 main.go:141] libmachine: (addons-723198) DBG | I0802 17:26:49.479265   13512 retry.go:31] will retry after 1.335200148s: waiting for machine to come up
	I0802 17:26:50.816679   13490 main.go:141] libmachine: (addons-723198) DBG | domain addons-723198 has defined MAC address 52:54:00:6f:78:5c in network mk-addons-723198
	I0802 17:26:50.817161   13490 main.go:141] libmachine: (addons-723198) DBG | unable to find current IP address of domain addons-723198 in network mk-addons-723198
	I0802 17:26:50.817182   13490 main.go:141] libmachine: (addons-723198) DBG | I0802 17:26:50.817111   13512 retry.go:31] will retry after 1.58745919s: waiting for machine to come up
	I0802 17:26:52.406554   13490 main.go:141] libmachine: (addons-723198) DBG | domain addons-723198 has defined MAC address 52:54:00:6f:78:5c in network mk-addons-723198
	I0802 17:26:52.406916   13490 main.go:141] libmachine: (addons-723198) DBG | unable to find current IP address of domain addons-723198 in network mk-addons-723198
	I0802 17:26:52.406943   13490 main.go:141] libmachine: (addons-723198) DBG | I0802 17:26:52.406894   13512 retry.go:31] will retry after 2.151649797s: waiting for machine to come up
	I0802 17:26:54.561137   13490 main.go:141] libmachine: (addons-723198) DBG | domain addons-723198 has defined MAC address 52:54:00:6f:78:5c in network mk-addons-723198
	I0802 17:26:54.561524   13490 main.go:141] libmachine: (addons-723198) DBG | unable to find current IP address of domain addons-723198 in network mk-addons-723198
	I0802 17:26:54.561551   13490 main.go:141] libmachine: (addons-723198) DBG | I0802 17:26:54.561491   13512 retry.go:31] will retry after 2.648689562s: waiting for machine to come up
	I0802 17:26:57.213210   13490 main.go:141] libmachine: (addons-723198) DBG | domain addons-723198 has defined MAC address 52:54:00:6f:78:5c in network mk-addons-723198
	I0802 17:26:57.213565   13490 main.go:141] libmachine: (addons-723198) DBG | unable to find current IP address of domain addons-723198 in network mk-addons-723198
	I0802 17:26:57.213587   13490 main.go:141] libmachine: (addons-723198) DBG | I0802 17:26:57.213526   13512 retry.go:31] will retry after 3.517030947s: waiting for machine to come up
	I0802 17:27:00.734703   13490 main.go:141] libmachine: (addons-723198) DBG | domain addons-723198 has defined MAC address 52:54:00:6f:78:5c in network mk-addons-723198
	I0802 17:27:00.735154   13490 main.go:141] libmachine: (addons-723198) DBG | unable to find current IP address of domain addons-723198 in network mk-addons-723198
	I0802 17:27:00.735175   13490 main.go:141] libmachine: (addons-723198) DBG | I0802 17:27:00.735122   13512 retry.go:31] will retry after 3.934621468s: waiting for machine to come up
	I0802 17:27:04.673063   13490 main.go:141] libmachine: (addons-723198) DBG | domain addons-723198 has defined MAC address 52:54:00:6f:78:5c in network mk-addons-723198
	I0802 17:27:04.673726   13490 main.go:141] libmachine: (addons-723198) Found IP for machine: 192.168.39.195
	I0802 17:27:04.673747   13490 main.go:141] libmachine: (addons-723198) Reserving static IP address...
	I0802 17:27:04.673761   13490 main.go:141] libmachine: (addons-723198) DBG | domain addons-723198 has current primary IP address 192.168.39.195 and MAC address 52:54:00:6f:78:5c in network mk-addons-723198
	I0802 17:27:04.674153   13490 main.go:141] libmachine: (addons-723198) DBG | unable to find host DHCP lease matching {name: "addons-723198", mac: "52:54:00:6f:78:5c", ip: "192.168.39.195"} in network mk-addons-723198
	I0802 17:27:04.747108   13490 main.go:141] libmachine: (addons-723198) DBG | Getting to WaitForSSH function...
	I0802 17:27:04.747140   13490 main.go:141] libmachine: (addons-723198) Reserved static IP address: 192.168.39.195
	I0802 17:27:04.747176   13490 main.go:141] libmachine: (addons-723198) Waiting for SSH to be available...
	I0802 17:27:04.749101   13490 main.go:141] libmachine: (addons-723198) DBG | domain addons-723198 has defined MAC address 52:54:00:6f:78:5c in network mk-addons-723198
	I0802 17:27:04.749570   13490 main.go:141] libmachine: (addons-723198) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6f:78:5c", ip: ""} in network mk-addons-723198: {Iface:virbr1 ExpiryTime:2024-08-02 18:26:56 +0000 UTC Type:0 Mac:52:54:00:6f:78:5c Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:minikube Clientid:01:52:54:00:6f:78:5c}
	I0802 17:27:04.749595   13490 main.go:141] libmachine: (addons-723198) DBG | domain addons-723198 has defined IP address 192.168.39.195 and MAC address 52:54:00:6f:78:5c in network mk-addons-723198
	I0802 17:27:04.749722   13490 main.go:141] libmachine: (addons-723198) DBG | Using SSH client type: external
	I0802 17:27:04.749748   13490 main.go:141] libmachine: (addons-723198) DBG | Using SSH private key: /home/jenkins/minikube-integration/19355-5398/.minikube/machines/addons-723198/id_rsa (-rw-------)
	I0802 17:27:04.749784   13490 main.go:141] libmachine: (addons-723198) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.195 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19355-5398/.minikube/machines/addons-723198/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0802 17:27:04.749799   13490 main.go:141] libmachine: (addons-723198) DBG | About to run SSH command:
	I0802 17:27:04.749813   13490 main.go:141] libmachine: (addons-723198) DBG | exit 0
	I0802 17:27:04.887055   13490 main.go:141] libmachine: (addons-723198) DBG | SSH cmd err, output: <nil>: 
	I0802 17:27:04.887315   13490 main.go:141] libmachine: (addons-723198) KVM machine creation complete!
	I0802 17:27:04.887790   13490 main.go:141] libmachine: (addons-723198) Calling .GetConfigRaw
	I0802 17:27:04.888321   13490 main.go:141] libmachine: (addons-723198) Calling .DriverName
	I0802 17:27:04.888549   13490 main.go:141] libmachine: (addons-723198) Calling .DriverName
	I0802 17:27:04.888724   13490 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0802 17:27:04.888738   13490 main.go:141] libmachine: (addons-723198) Calling .GetState
	I0802 17:27:04.890152   13490 main.go:141] libmachine: Detecting operating system of created instance...
	I0802 17:27:04.890172   13490 main.go:141] libmachine: Waiting for SSH to be available...
	I0802 17:27:04.890180   13490 main.go:141] libmachine: Getting to WaitForSSH function...
	I0802 17:27:04.890204   13490 main.go:141] libmachine: (addons-723198) Calling .GetSSHHostname
	I0802 17:27:04.892549   13490 main.go:141] libmachine: (addons-723198) DBG | domain addons-723198 has defined MAC address 52:54:00:6f:78:5c in network mk-addons-723198
	I0802 17:27:04.892935   13490 main.go:141] libmachine: (addons-723198) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6f:78:5c", ip: ""} in network mk-addons-723198: {Iface:virbr1 ExpiryTime:2024-08-02 18:26:56 +0000 UTC Type:0 Mac:52:54:00:6f:78:5c Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:addons-723198 Clientid:01:52:54:00:6f:78:5c}
	I0802 17:27:04.892961   13490 main.go:141] libmachine: (addons-723198) DBG | domain addons-723198 has defined IP address 192.168.39.195 and MAC address 52:54:00:6f:78:5c in network mk-addons-723198
	I0802 17:27:04.893062   13490 main.go:141] libmachine: (addons-723198) Calling .GetSSHPort
	I0802 17:27:04.893246   13490 main.go:141] libmachine: (addons-723198) Calling .GetSSHKeyPath
	I0802 17:27:04.893451   13490 main.go:141] libmachine: (addons-723198) Calling .GetSSHKeyPath
	I0802 17:27:04.893614   13490 main.go:141] libmachine: (addons-723198) Calling .GetSSHUsername
	I0802 17:27:04.893771   13490 main.go:141] libmachine: Using SSH client type: native
	I0802 17:27:04.893993   13490 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.195 22 <nil> <nil>}
	I0802 17:27:04.894006   13490 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0802 17:27:05.002356   13490 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0802 17:27:05.002376   13490 main.go:141] libmachine: Detecting the provisioner...
	I0802 17:27:05.002383   13490 main.go:141] libmachine: (addons-723198) Calling .GetSSHHostname
	I0802 17:27:05.005020   13490 main.go:141] libmachine: (addons-723198) DBG | domain addons-723198 has defined MAC address 52:54:00:6f:78:5c in network mk-addons-723198
	I0802 17:27:05.005338   13490 main.go:141] libmachine: (addons-723198) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6f:78:5c", ip: ""} in network mk-addons-723198: {Iface:virbr1 ExpiryTime:2024-08-02 18:26:56 +0000 UTC Type:0 Mac:52:54:00:6f:78:5c Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:addons-723198 Clientid:01:52:54:00:6f:78:5c}
	I0802 17:27:05.005365   13490 main.go:141] libmachine: (addons-723198) DBG | domain addons-723198 has defined IP address 192.168.39.195 and MAC address 52:54:00:6f:78:5c in network mk-addons-723198
	I0802 17:27:05.005488   13490 main.go:141] libmachine: (addons-723198) Calling .GetSSHPort
	I0802 17:27:05.005675   13490 main.go:141] libmachine: (addons-723198) Calling .GetSSHKeyPath
	I0802 17:27:05.005829   13490 main.go:141] libmachine: (addons-723198) Calling .GetSSHKeyPath
	I0802 17:27:05.005986   13490 main.go:141] libmachine: (addons-723198) Calling .GetSSHUsername
	I0802 17:27:05.006151   13490 main.go:141] libmachine: Using SSH client type: native
	I0802 17:27:05.006303   13490 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.195 22 <nil> <nil>}
	I0802 17:27:05.006315   13490 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0802 17:27:05.115591   13490 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0802 17:27:05.115666   13490 main.go:141] libmachine: found compatible host: buildroot
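Provisioner detection above works by running `cat /etc/os-release` over SSH and matching the `ID` field ("buildroot"). A self-contained sketch of parsing that output — `parseOSRelease` is an illustrative helper, not libmachine's actual function:

```go
package main

import (
	"bufio"
	"fmt"
	"strings"
)

// parseOSRelease extracts KEY=VALUE pairs from `cat /etc/os-release`
// output, stripping surrounding double quotes from values.
func parseOSRelease(out string) map[string]string {
	fields := map[string]string{}
	sc := bufio.NewScanner(strings.NewReader(out))
	for sc.Scan() {
		k, v, ok := strings.Cut(sc.Text(), "=")
		if !ok {
			continue // skip blank or malformed lines
		}
		fields[k] = strings.Trim(v, `"`)
	}
	return fields
}

func main() {
	// The exact output captured in the log above.
	out := "NAME=Buildroot\nVERSION=2023.02.9-dirty\nID=buildroot\n" +
		"VERSION_ID=2023.02.9\nPRETTY_NAME=\"Buildroot 2023.02.9\"\n"
	f := parseOSRelease(out)
	fmt.Println("detected provisioner:", f["ID"])
}
```

Matching on `ID` rather than `PRETTY_NAME` is the robust choice, since os-release(5) defines `ID` as the stable lowercase identifier.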
	I0802 17:27:05.115677   13490 main.go:141] libmachine: Provisioning with buildroot...
	I0802 17:27:05.115684   13490 main.go:141] libmachine: (addons-723198) Calling .GetMachineName
	I0802 17:27:05.115934   13490 buildroot.go:166] provisioning hostname "addons-723198"
	I0802 17:27:05.115961   13490 main.go:141] libmachine: (addons-723198) Calling .GetMachineName
	I0802 17:27:05.116186   13490 main.go:141] libmachine: (addons-723198) Calling .GetSSHHostname
	I0802 17:27:05.119566   13490 main.go:141] libmachine: (addons-723198) DBG | domain addons-723198 has defined MAC address 52:54:00:6f:78:5c in network mk-addons-723198
	I0802 17:27:05.119918   13490 main.go:141] libmachine: (addons-723198) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6f:78:5c", ip: ""} in network mk-addons-723198: {Iface:virbr1 ExpiryTime:2024-08-02 18:26:56 +0000 UTC Type:0 Mac:52:54:00:6f:78:5c Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:addons-723198 Clientid:01:52:54:00:6f:78:5c}
	I0802 17:27:05.119940   13490 main.go:141] libmachine: (addons-723198) DBG | domain addons-723198 has defined IP address 192.168.39.195 and MAC address 52:54:00:6f:78:5c in network mk-addons-723198
	I0802 17:27:05.120170   13490 main.go:141] libmachine: (addons-723198) Calling .GetSSHPort
	I0802 17:27:05.120352   13490 main.go:141] libmachine: (addons-723198) Calling .GetSSHKeyPath
	I0802 17:27:05.120493   13490 main.go:141] libmachine: (addons-723198) Calling .GetSSHKeyPath
	I0802 17:27:05.120619   13490 main.go:141] libmachine: (addons-723198) Calling .GetSSHUsername
	I0802 17:27:05.120788   13490 main.go:141] libmachine: Using SSH client type: native
	I0802 17:27:05.120955   13490 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.195 22 <nil> <nil>}
	I0802 17:27:05.120967   13490 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-723198 && echo "addons-723198" | sudo tee /etc/hostname
	I0802 17:27:05.244633   13490 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-723198
	
	I0802 17:27:05.244662   13490 main.go:141] libmachine: (addons-723198) Calling .GetSSHHostname
	I0802 17:27:05.247205   13490 main.go:141] libmachine: (addons-723198) DBG | domain addons-723198 has defined MAC address 52:54:00:6f:78:5c in network mk-addons-723198
	I0802 17:27:05.247500   13490 main.go:141] libmachine: (addons-723198) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6f:78:5c", ip: ""} in network mk-addons-723198: {Iface:virbr1 ExpiryTime:2024-08-02 18:26:56 +0000 UTC Type:0 Mac:52:54:00:6f:78:5c Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:addons-723198 Clientid:01:52:54:00:6f:78:5c}
	I0802 17:27:05.247518   13490 main.go:141] libmachine: (addons-723198) DBG | domain addons-723198 has defined IP address 192.168.39.195 and MAC address 52:54:00:6f:78:5c in network mk-addons-723198
	I0802 17:27:05.247786   13490 main.go:141] libmachine: (addons-723198) Calling .GetSSHPort
	I0802 17:27:05.247966   13490 main.go:141] libmachine: (addons-723198) Calling .GetSSHKeyPath
	I0802 17:27:05.248135   13490 main.go:141] libmachine: (addons-723198) Calling .GetSSHKeyPath
	I0802 17:27:05.248282   13490 main.go:141] libmachine: (addons-723198) Calling .GetSSHUsername
	I0802 17:27:05.248434   13490 main.go:141] libmachine: Using SSH client type: native
	I0802 17:27:05.248618   13490 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.195 22 <nil> <nil>}
	I0802 17:27:05.248640   13490 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-723198' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-723198/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-723198' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0802 17:27:05.367130   13490 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0802 17:27:05.367164   13490 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19355-5398/.minikube CaCertPath:/home/jenkins/minikube-integration/19355-5398/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19355-5398/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19355-5398/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19355-5398/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19355-5398/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19355-5398/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19355-5398/.minikube}
	I0802 17:27:05.367193   13490 buildroot.go:174] setting up certificates
	I0802 17:27:05.367202   13490 provision.go:84] configureAuth start
	I0802 17:27:05.367210   13490 main.go:141] libmachine: (addons-723198) Calling .GetMachineName
	I0802 17:27:05.367516   13490 main.go:141] libmachine: (addons-723198) Calling .GetIP
	I0802 17:27:05.370323   13490 main.go:141] libmachine: (addons-723198) DBG | domain addons-723198 has defined MAC address 52:54:00:6f:78:5c in network mk-addons-723198
	I0802 17:27:05.370758   13490 main.go:141] libmachine: (addons-723198) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6f:78:5c", ip: ""} in network mk-addons-723198: {Iface:virbr1 ExpiryTime:2024-08-02 18:26:56 +0000 UTC Type:0 Mac:52:54:00:6f:78:5c Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:addons-723198 Clientid:01:52:54:00:6f:78:5c}
	I0802 17:27:05.370803   13490 main.go:141] libmachine: (addons-723198) DBG | domain addons-723198 has defined IP address 192.168.39.195 and MAC address 52:54:00:6f:78:5c in network mk-addons-723198
	I0802 17:27:05.370908   13490 main.go:141] libmachine: (addons-723198) Calling .GetSSHHostname
	I0802 17:27:05.373014   13490 main.go:141] libmachine: (addons-723198) DBG | domain addons-723198 has defined MAC address 52:54:00:6f:78:5c in network mk-addons-723198
	I0802 17:27:05.373338   13490 main.go:141] libmachine: (addons-723198) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6f:78:5c", ip: ""} in network mk-addons-723198: {Iface:virbr1 ExpiryTime:2024-08-02 18:26:56 +0000 UTC Type:0 Mac:52:54:00:6f:78:5c Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:addons-723198 Clientid:01:52:54:00:6f:78:5c}
	I0802 17:27:05.373366   13490 main.go:141] libmachine: (addons-723198) DBG | domain addons-723198 has defined IP address 192.168.39.195 and MAC address 52:54:00:6f:78:5c in network mk-addons-723198
	I0802 17:27:05.373490   13490 provision.go:143] copyHostCerts
	I0802 17:27:05.373562   13490 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19355-5398/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19355-5398/.minikube/ca.pem (1078 bytes)
	I0802 17:27:05.373685   13490 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19355-5398/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19355-5398/.minikube/cert.pem (1123 bytes)
	I0802 17:27:05.373774   13490 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19355-5398/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19355-5398/.minikube/key.pem (1679 bytes)
	I0802 17:27:05.373834   13490 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19355-5398/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19355-5398/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19355-5398/.minikube/certs/ca-key.pem org=jenkins.addons-723198 san=[127.0.0.1 192.168.39.195 addons-723198 localhost minikube]
	I0802 17:27:05.444324   13490 provision.go:177] copyRemoteCerts
	I0802 17:27:05.444379   13490 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0802 17:27:05.444399   13490 main.go:141] libmachine: (addons-723198) Calling .GetSSHHostname
	I0802 17:27:05.446855   13490 main.go:141] libmachine: (addons-723198) DBG | domain addons-723198 has defined MAC address 52:54:00:6f:78:5c in network mk-addons-723198
	I0802 17:27:05.447172   13490 main.go:141] libmachine: (addons-723198) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6f:78:5c", ip: ""} in network mk-addons-723198: {Iface:virbr1 ExpiryTime:2024-08-02 18:26:56 +0000 UTC Type:0 Mac:52:54:00:6f:78:5c Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:addons-723198 Clientid:01:52:54:00:6f:78:5c}
	I0802 17:27:05.447214   13490 main.go:141] libmachine: (addons-723198) DBG | domain addons-723198 has defined IP address 192.168.39.195 and MAC address 52:54:00:6f:78:5c in network mk-addons-723198
	I0802 17:27:05.447408   13490 main.go:141] libmachine: (addons-723198) Calling .GetSSHPort
	I0802 17:27:05.447592   13490 main.go:141] libmachine: (addons-723198) Calling .GetSSHKeyPath
	I0802 17:27:05.447740   13490 main.go:141] libmachine: (addons-723198) Calling .GetSSHUsername
	I0802 17:27:05.447910   13490 sshutil.go:53] new ssh client: &{IP:192.168.39.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-5398/.minikube/machines/addons-723198/id_rsa Username:docker}
	I0802 17:27:05.532305   13490 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5398/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0802 17:27:05.555201   13490 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5398/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0802 17:27:05.577310   13490 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5398/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0802 17:27:05.599421   13490 provision.go:87] duration metric: took 232.207168ms to configureAuth
	I0802 17:27:05.599452   13490 buildroot.go:189] setting minikube options for container-runtime
	I0802 17:27:05.599656   13490 config.go:182] Loaded profile config "addons-723198": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0802 17:27:05.599686   13490 main.go:141] libmachine: (addons-723198) Calling .DriverName
	I0802 17:27:05.599944   13490 main.go:141] libmachine: (addons-723198) Calling .GetSSHHostname
	I0802 17:27:05.602574   13490 main.go:141] libmachine: (addons-723198) DBG | domain addons-723198 has defined MAC address 52:54:00:6f:78:5c in network mk-addons-723198
	I0802 17:27:05.602934   13490 main.go:141] libmachine: (addons-723198) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6f:78:5c", ip: ""} in network mk-addons-723198: {Iface:virbr1 ExpiryTime:2024-08-02 18:26:56 +0000 UTC Type:0 Mac:52:54:00:6f:78:5c Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:addons-723198 Clientid:01:52:54:00:6f:78:5c}
	I0802 17:27:05.602960   13490 main.go:141] libmachine: (addons-723198) DBG | domain addons-723198 has defined IP address 192.168.39.195 and MAC address 52:54:00:6f:78:5c in network mk-addons-723198
	I0802 17:27:05.603139   13490 main.go:141] libmachine: (addons-723198) Calling .GetSSHPort
	I0802 17:27:05.603314   13490 main.go:141] libmachine: (addons-723198) Calling .GetSSHKeyPath
	I0802 17:27:05.603474   13490 main.go:141] libmachine: (addons-723198) Calling .GetSSHKeyPath
	I0802 17:27:05.603583   13490 main.go:141] libmachine: (addons-723198) Calling .GetSSHUsername
	I0802 17:27:05.603696   13490 main.go:141] libmachine: Using SSH client type: native
	I0802 17:27:05.603886   13490 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.195 22 <nil> <nil>}
	I0802 17:27:05.603902   13490 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0802 17:27:05.712258   13490 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0802 17:27:05.712291   13490 buildroot.go:70] root file system type: tmpfs
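	The probe the provisioner just ran can be reproduced standalone with plain coreutils; a minimal sketch, nothing minikube-specific assumed:

```shell
# Ask df for only the filesystem-type column of /, then keep the last
# line to skip the "Type" header -- the same probe as above, which
# reported "tmpfs" for the buildroot guest.
fstype=$(df --output=fstype / | tail -n 1)
echo "root fstype: ${fstype}"
```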
	I0802 17:27:05.712430   13490 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0802 17:27:05.712466   13490 main.go:141] libmachine: (addons-723198) Calling .GetSSHHostname
	I0802 17:27:05.714984   13490 main.go:141] libmachine: (addons-723198) DBG | domain addons-723198 has defined MAC address 52:54:00:6f:78:5c in network mk-addons-723198
	I0802 17:27:05.715337   13490 main.go:141] libmachine: (addons-723198) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6f:78:5c", ip: ""} in network mk-addons-723198: {Iface:virbr1 ExpiryTime:2024-08-02 18:26:56 +0000 UTC Type:0 Mac:52:54:00:6f:78:5c Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:addons-723198 Clientid:01:52:54:00:6f:78:5c}
	I0802 17:27:05.715374   13490 main.go:141] libmachine: (addons-723198) DBG | domain addons-723198 has defined IP address 192.168.39.195 and MAC address 52:54:00:6f:78:5c in network mk-addons-723198
	I0802 17:27:05.715522   13490 main.go:141] libmachine: (addons-723198) Calling .GetSSHPort
	I0802 17:27:05.715717   13490 main.go:141] libmachine: (addons-723198) Calling .GetSSHKeyPath
	I0802 17:27:05.715867   13490 main.go:141] libmachine: (addons-723198) Calling .GetSSHKeyPath
	I0802 17:27:05.716017   13490 main.go:141] libmachine: (addons-723198) Calling .GetSSHUsername
	I0802 17:27:05.716179   13490 main.go:141] libmachine: Using SSH client type: native
	I0802 17:27:05.716332   13490 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.195 22 <nil> <nil>}
	I0802 17:27:05.716390   13490 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0802 17:27:05.835730   13490 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0802 17:27:05.835789   13490 main.go:141] libmachine: (addons-723198) Calling .GetSSHHostname
	I0802 17:27:05.838542   13490 main.go:141] libmachine: (addons-723198) DBG | domain addons-723198 has defined MAC address 52:54:00:6f:78:5c in network mk-addons-723198
	I0802 17:27:05.838861   13490 main.go:141] libmachine: (addons-723198) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6f:78:5c", ip: ""} in network mk-addons-723198: {Iface:virbr1 ExpiryTime:2024-08-02 18:26:56 +0000 UTC Type:0 Mac:52:54:00:6f:78:5c Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:addons-723198 Clientid:01:52:54:00:6f:78:5c}
	I0802 17:27:05.838884   13490 main.go:141] libmachine: (addons-723198) DBG | domain addons-723198 has defined IP address 192.168.39.195 and MAC address 52:54:00:6f:78:5c in network mk-addons-723198
	I0802 17:27:05.839149   13490 main.go:141] libmachine: (addons-723198) Calling .GetSSHPort
	I0802 17:27:05.839364   13490 main.go:141] libmachine: (addons-723198) Calling .GetSSHKeyPath
	I0802 17:27:05.839530   13490 main.go:141] libmachine: (addons-723198) Calling .GetSSHKeyPath
	I0802 17:27:05.839698   13490 main.go:141] libmachine: (addons-723198) Calling .GetSSHUsername
	I0802 17:27:05.839848   13490 main.go:141] libmachine: Using SSH client type: native
	I0802 17:27:05.840009   13490 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.195 22 <nil> <nil>}
	I0802 17:27:05.840026   13490 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0802 17:27:07.581196   13490 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
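	The command above is a write-if-changed install: the staged `docker.service.new` replaces the live unit (followed by daemon-reload, enable, and restart) only when `diff` finds a difference; on this first boot there was no existing unit at all, hence the "can't stat" miss and the fresh symlink. The same pattern, sketched with temp files so it runs without root or systemd:

```shell
# unit stands in for /lib/systemd/system/docker.service (empty here,
# like the missing file on first boot); new stands in for the freshly
# generated docker.service.new.
unit=$(mktemp)
new=$(mktemp)
printf '[Unit]\nDescription=demo\n' > "$new"
# Install only when content differs; the real flow then runs
# `systemctl -f daemon-reload && systemctl -f enable docker && systemctl -f restart docker`.
diff -u "$unit" "$new" >/dev/null || {
  mv "$new" "$unit"
  echo "unit updated"
}
```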
	
	I0802 17:27:07.581220   13490 main.go:141] libmachine: Checking connection to Docker...
	I0802 17:27:07.581228   13490 main.go:141] libmachine: (addons-723198) Calling .GetURL
	I0802 17:27:07.582416   13490 main.go:141] libmachine: (addons-723198) DBG | Using libvirt version 6000000
	I0802 17:27:07.584732   13490 main.go:141] libmachine: (addons-723198) DBG | domain addons-723198 has defined MAC address 52:54:00:6f:78:5c in network mk-addons-723198
	I0802 17:27:07.585045   13490 main.go:141] libmachine: (addons-723198) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6f:78:5c", ip: ""} in network mk-addons-723198: {Iface:virbr1 ExpiryTime:2024-08-02 18:26:56 +0000 UTC Type:0 Mac:52:54:00:6f:78:5c Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:addons-723198 Clientid:01:52:54:00:6f:78:5c}
	I0802 17:27:07.585077   13490 main.go:141] libmachine: (addons-723198) DBG | domain addons-723198 has defined IP address 192.168.39.195 and MAC address 52:54:00:6f:78:5c in network mk-addons-723198
	I0802 17:27:07.585221   13490 main.go:141] libmachine: Docker is up and running!
	I0802 17:27:07.585240   13490 main.go:141] libmachine: Reticulating splines...
	I0802 17:27:07.585248   13490 client.go:171] duration metric: took 25.222413845s to LocalClient.Create
	I0802 17:27:07.585275   13490 start.go:167] duration metric: took 25.222495036s to libmachine.API.Create "addons-723198"
	I0802 17:27:07.585288   13490 start.go:293] postStartSetup for "addons-723198" (driver="kvm2")
	I0802 17:27:07.585300   13490 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0802 17:27:07.585323   13490 main.go:141] libmachine: (addons-723198) Calling .DriverName
	I0802 17:27:07.585572   13490 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0802 17:27:07.585592   13490 main.go:141] libmachine: (addons-723198) Calling .GetSSHHostname
	I0802 17:27:07.587823   13490 main.go:141] libmachine: (addons-723198) DBG | domain addons-723198 has defined MAC address 52:54:00:6f:78:5c in network mk-addons-723198
	I0802 17:27:07.588161   13490 main.go:141] libmachine: (addons-723198) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6f:78:5c", ip: ""} in network mk-addons-723198: {Iface:virbr1 ExpiryTime:2024-08-02 18:26:56 +0000 UTC Type:0 Mac:52:54:00:6f:78:5c Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:addons-723198 Clientid:01:52:54:00:6f:78:5c}
	I0802 17:27:07.588183   13490 main.go:141] libmachine: (addons-723198) DBG | domain addons-723198 has defined IP address 192.168.39.195 and MAC address 52:54:00:6f:78:5c in network mk-addons-723198
	I0802 17:27:07.588321   13490 main.go:141] libmachine: (addons-723198) Calling .GetSSHPort
	I0802 17:27:07.588506   13490 main.go:141] libmachine: (addons-723198) Calling .GetSSHKeyPath
	I0802 17:27:07.588652   13490 main.go:141] libmachine: (addons-723198) Calling .GetSSHUsername
	I0802 17:27:07.588797   13490 sshutil.go:53] new ssh client: &{IP:192.168.39.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-5398/.minikube/machines/addons-723198/id_rsa Username:docker}
	I0802 17:27:07.674528   13490 ssh_runner.go:195] Run: cat /etc/os-release
	I0802 17:27:07.678756   13490 info.go:137] Remote host: Buildroot 2023.02.9
	I0802 17:27:07.678779   13490 filesync.go:126] Scanning /home/jenkins/minikube-integration/19355-5398/.minikube/addons for local assets ...
	I0802 17:27:07.678868   13490 filesync.go:126] Scanning /home/jenkins/minikube-integration/19355-5398/.minikube/files for local assets ...
	I0802 17:27:07.678901   13490 start.go:296] duration metric: took 93.605648ms for postStartSetup
	I0802 17:27:07.678943   13490 main.go:141] libmachine: (addons-723198) Calling .GetConfigRaw
	I0802 17:27:07.679490   13490 main.go:141] libmachine: (addons-723198) Calling .GetIP
	I0802 17:27:07.681817   13490 main.go:141] libmachine: (addons-723198) DBG | domain addons-723198 has defined MAC address 52:54:00:6f:78:5c in network mk-addons-723198
	I0802 17:27:07.682103   13490 main.go:141] libmachine: (addons-723198) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6f:78:5c", ip: ""} in network mk-addons-723198: {Iface:virbr1 ExpiryTime:2024-08-02 18:26:56 +0000 UTC Type:0 Mac:52:54:00:6f:78:5c Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:addons-723198 Clientid:01:52:54:00:6f:78:5c}
	I0802 17:27:07.682132   13490 main.go:141] libmachine: (addons-723198) DBG | domain addons-723198 has defined IP address 192.168.39.195 and MAC address 52:54:00:6f:78:5c in network mk-addons-723198
	I0802 17:27:07.682344   13490 profile.go:143] Saving config to /home/jenkins/minikube-integration/19355-5398/.minikube/profiles/addons-723198/config.json ...
	I0802 17:27:07.682514   13490 start.go:128] duration metric: took 25.337115213s to createHost
	I0802 17:27:07.682532   13490 main.go:141] libmachine: (addons-723198) Calling .GetSSHHostname
	I0802 17:27:07.684773   13490 main.go:141] libmachine: (addons-723198) DBG | domain addons-723198 has defined MAC address 52:54:00:6f:78:5c in network mk-addons-723198
	I0802 17:27:07.685081   13490 main.go:141] libmachine: (addons-723198) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6f:78:5c", ip: ""} in network mk-addons-723198: {Iface:virbr1 ExpiryTime:2024-08-02 18:26:56 +0000 UTC Type:0 Mac:52:54:00:6f:78:5c Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:addons-723198 Clientid:01:52:54:00:6f:78:5c}
	I0802 17:27:07.685105   13490 main.go:141] libmachine: (addons-723198) DBG | domain addons-723198 has defined IP address 192.168.39.195 and MAC address 52:54:00:6f:78:5c in network mk-addons-723198
	I0802 17:27:07.685261   13490 main.go:141] libmachine: (addons-723198) Calling .GetSSHPort
	I0802 17:27:07.685453   13490 main.go:141] libmachine: (addons-723198) Calling .GetSSHKeyPath
	I0802 17:27:07.685598   13490 main.go:141] libmachine: (addons-723198) Calling .GetSSHKeyPath
	I0802 17:27:07.685715   13490 main.go:141] libmachine: (addons-723198) Calling .GetSSHUsername
	I0802 17:27:07.685860   13490 main.go:141] libmachine: Using SSH client type: native
	I0802 17:27:07.686052   13490 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.195 22 <nil> <nil>}
	I0802 17:27:07.686064   13490 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0802 17:27:07.795111   13490 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722619627.775479440
	
	I0802 17:27:07.795134   13490 fix.go:216] guest clock: 1722619627.775479440
	I0802 17:27:07.795143   13490 fix.go:229] Guest: 2024-08-02 17:27:07.77547944 +0000 UTC Remote: 2024-08-02 17:27:07.682523731 +0000 UTC m=+25.437420211 (delta=92.955709ms)
	I0802 17:27:07.795195   13490 fix.go:200] guest clock delta is within tolerance: 92.955709ms
	I0802 17:27:07.795203   13490 start.go:83] releasing machines lock for "addons-723198", held for 25.449871163s
	I0802 17:27:07.795228   13490 main.go:141] libmachine: (addons-723198) Calling .DriverName
	I0802 17:27:07.795524   13490 main.go:141] libmachine: (addons-723198) Calling .GetIP
	I0802 17:27:07.797908   13490 main.go:141] libmachine: (addons-723198) DBG | domain addons-723198 has defined MAC address 52:54:00:6f:78:5c in network mk-addons-723198
	I0802 17:27:07.798253   13490 main.go:141] libmachine: (addons-723198) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6f:78:5c", ip: ""} in network mk-addons-723198: {Iface:virbr1 ExpiryTime:2024-08-02 18:26:56 +0000 UTC Type:0 Mac:52:54:00:6f:78:5c Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:addons-723198 Clientid:01:52:54:00:6f:78:5c}
	I0802 17:27:07.798278   13490 main.go:141] libmachine: (addons-723198) DBG | domain addons-723198 has defined IP address 192.168.39.195 and MAC address 52:54:00:6f:78:5c in network mk-addons-723198
	I0802 17:27:07.798414   13490 main.go:141] libmachine: (addons-723198) Calling .DriverName
	I0802 17:27:07.798946   13490 main.go:141] libmachine: (addons-723198) Calling .DriverName
	I0802 17:27:07.799127   13490 main.go:141] libmachine: (addons-723198) Calling .DriverName
	I0802 17:27:07.799311   13490 ssh_runner.go:195] Run: cat /version.json
	I0802 17:27:07.799320   13490 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0802 17:27:07.799338   13490 main.go:141] libmachine: (addons-723198) Calling .GetSSHHostname
	I0802 17:27:07.799365   13490 main.go:141] libmachine: (addons-723198) Calling .GetSSHHostname
	I0802 17:27:07.801581   13490 main.go:141] libmachine: (addons-723198) DBG | domain addons-723198 has defined MAC address 52:54:00:6f:78:5c in network mk-addons-723198
	I0802 17:27:07.801959   13490 main.go:141] libmachine: (addons-723198) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6f:78:5c", ip: ""} in network mk-addons-723198: {Iface:virbr1 ExpiryTime:2024-08-02 18:26:56 +0000 UTC Type:0 Mac:52:54:00:6f:78:5c Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:addons-723198 Clientid:01:52:54:00:6f:78:5c}
	I0802 17:27:07.801987   13490 main.go:141] libmachine: (addons-723198) DBG | domain addons-723198 has defined IP address 192.168.39.195 and MAC address 52:54:00:6f:78:5c in network mk-addons-723198
	I0802 17:27:07.802010   13490 main.go:141] libmachine: (addons-723198) DBG | domain addons-723198 has defined MAC address 52:54:00:6f:78:5c in network mk-addons-723198
	I0802 17:27:07.802096   13490 main.go:141] libmachine: (addons-723198) Calling .GetSSHPort
	I0802 17:27:07.802285   13490 main.go:141] libmachine: (addons-723198) Calling .GetSSHKeyPath
	I0802 17:27:07.802367   13490 main.go:141] libmachine: (addons-723198) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6f:78:5c", ip: ""} in network mk-addons-723198: {Iface:virbr1 ExpiryTime:2024-08-02 18:26:56 +0000 UTC Type:0 Mac:52:54:00:6f:78:5c Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:addons-723198 Clientid:01:52:54:00:6f:78:5c}
	I0802 17:27:07.802390   13490 main.go:141] libmachine: (addons-723198) DBG | domain addons-723198 has defined IP address 192.168.39.195 and MAC address 52:54:00:6f:78:5c in network mk-addons-723198
	I0802 17:27:07.802432   13490 main.go:141] libmachine: (addons-723198) Calling .GetSSHUsername
	I0802 17:27:07.802543   13490 main.go:141] libmachine: (addons-723198) Calling .GetSSHPort
	I0802 17:27:07.802587   13490 sshutil.go:53] new ssh client: &{IP:192.168.39.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-5398/.minikube/machines/addons-723198/id_rsa Username:docker}
	I0802 17:27:07.802677   13490 main.go:141] libmachine: (addons-723198) Calling .GetSSHKeyPath
	I0802 17:27:07.802824   13490 main.go:141] libmachine: (addons-723198) Calling .GetSSHUsername
	I0802 17:27:07.802946   13490 sshutil.go:53] new ssh client: &{IP:192.168.39.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-5398/.minikube/machines/addons-723198/id_rsa Username:docker}
	I0802 17:27:07.879072   13490 ssh_runner.go:195] Run: systemctl --version
	I0802 17:27:07.904238   13490 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0802 17:27:07.909512   13490 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0802 17:27:07.909576   13490 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0802 17:27:07.925055   13490 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0802 17:27:07.925086   13490 start.go:495] detecting cgroup driver to use...
	I0802 17:27:07.925186   13490 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0802 17:27:07.942395   13490 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0802 17:27:07.952556   13490 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0802 17:27:07.962736   13490 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0802 17:27:07.962814   13490 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0802 17:27:07.972914   13490 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0802 17:27:07.982974   13490 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0802 17:27:07.992793   13490 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0802 17:27:08.002543   13490 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0802 17:27:08.012273   13490 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0802 17:27:08.021858   13490 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0802 17:27:08.031703   13490 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
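	Each sed invocation above rewrites /etc/containerd/config.toml in place while preserving the line's indentation through a captured group. A sketch of the SystemdCgroup flip against a throwaway file (the TOML snippet is illustrative, not the full containerd default config):

```shell
cfg=$(mktemp)   # stands in for /etc/containerd/config.toml
cat > "$cfg" <<'EOF'
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
  SystemdCgroup = true
EOF
# \1 re-emits the leading whitespace, so the TOML nesting is preserved
# while the value is forced to false (the "cgroupfs" driver choice).
sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' "$cfg"
grep 'SystemdCgroup' "$cfg"
```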
	I0802 17:27:08.042160   13490 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0802 17:27:08.051298   13490 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0802 17:27:08.060406   13490 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0802 17:27:08.177288   13490 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0802 17:27:08.199573   13490 start.go:495] detecting cgroup driver to use...
	I0802 17:27:08.199649   13490 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0802 17:27:08.214223   13490 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0802 17:27:08.227079   13490 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0802 17:27:08.243489   13490 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0802 17:27:08.257074   13490 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0802 17:27:08.270633   13490 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0802 17:27:08.299758   13490 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0802 17:27:08.313309   13490 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0802 17:27:08.330886   13490 ssh_runner.go:195] Run: which cri-dockerd
	I0802 17:27:08.334403   13490 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0802 17:27:08.342702   13490 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0802 17:27:08.358287   13490 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0802 17:27:08.465586   13490 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0802 17:27:08.575236   13490 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0802 17:27:08.575365   13490 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0802 17:27:08.591409   13490 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0802 17:27:08.702172   13490 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0802 17:27:11.031306   13490 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.329094274s)
	I0802 17:27:11.031374   13490 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0802 17:27:11.043902   13490 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0802 17:27:11.056210   13490 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0802 17:27:11.166338   13490 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0802 17:27:11.276996   13490 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0802 17:27:11.388021   13490 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0802 17:27:11.404757   13490 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0802 17:27:11.417472   13490 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0802 17:27:11.521469   13490 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0802 17:27:11.589641   13490 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0802 17:27:11.589733   13490 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0802 17:27:11.595063   13490 start.go:563] Will wait 60s for crictl version
	I0802 17:27:11.595120   13490 ssh_runner.go:195] Run: which crictl
	I0802 17:27:11.598577   13490 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0802 17:27:11.632693   13490 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.1.1
	RuntimeApiVersion:  v1
	I0802 17:27:11.632769   13490 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0802 17:27:11.658037   13490 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0802 17:27:11.682344   13490 out.go:204] * Preparing Kubernetes v1.30.3 on Docker 27.1.1 ...
	I0802 17:27:11.682383   13490 main.go:141] libmachine: (addons-723198) Calling .GetIP
	I0802 17:27:11.684879   13490 main.go:141] libmachine: (addons-723198) DBG | domain addons-723198 has defined MAC address 52:54:00:6f:78:5c in network mk-addons-723198
	I0802 17:27:11.685187   13490 main.go:141] libmachine: (addons-723198) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6f:78:5c", ip: ""} in network mk-addons-723198: {Iface:virbr1 ExpiryTime:2024-08-02 18:26:56 +0000 UTC Type:0 Mac:52:54:00:6f:78:5c Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:addons-723198 Clientid:01:52:54:00:6f:78:5c}
	I0802 17:27:11.685214   13490 main.go:141] libmachine: (addons-723198) DBG | domain addons-723198 has defined IP address 192.168.39.195 and MAC address 52:54:00:6f:78:5c in network mk-addons-723198
	I0802 17:27:11.685430   13490 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0802 17:27:11.689066   13490 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
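	The /etc/hosts refresh above is idempotent: strip any existing `host.minikube.internal` line, append the current mapping, build the result in a temp file, then copy it back over the original. The same steps against a scratch file:

```shell
hosts=$(mktemp)   # stands in for /etc/hosts
printf '127.0.0.1\tlocalhost\n192.168.39.1\thost.minikube.internal\n' > "$hosts"
# $'...' turns \t into a real tab; the trailing $ anchors the match at
# end of line, so only the exact stale entry is filtered out.
{ grep -v $'\thost.minikube.internal$' "$hosts"
  printf '192.168.39.1\thost.minikube.internal\n'
} > "$hosts.new"
mv "$hosts.new" "$hosts"
grep -c 'host.minikube.internal' "$hosts"   # exactly one entry survives
```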
	I0802 17:27:11.700515   13490 kubeadm.go:883] updating cluster {Name:addons-723198 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:addons-723198 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.195 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0802 17:27:11.700633   13490 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0802 17:27:11.700690   13490 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0802 17:27:11.714862   13490 docker.go:685] Got preloaded images: 
	I0802 17:27:11.714883   13490 docker.go:691] registry.k8s.io/kube-apiserver:v1.30.3 wasn't preloaded
	I0802 17:27:11.714934   13490 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0802 17:27:11.723744   13490 ssh_runner.go:195] Run: which lz4
	I0802 17:27:11.727113   13490 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0802 17:27:11.730604   13490 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0802 17:27:11.730627   13490 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5398/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (359612007 bytes)
	I0802 17:27:12.861665   13490 docker.go:649] duration metric: took 1.134574529s to copy over tarball
	I0802 17:27:12.861737   13490 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0802 17:27:14.819659   13490 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.957898462s)
	I0802 17:27:14.819684   13490 ssh_runner.go:146] rm: /preloaded.tar.lz4
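	The preload restore above streams an lz4-compressed tarball through tar with `-I lz4` (whose presence the earlier `which lz4` check confirms) into /var, preserving extended attributes. A small round-trip sketch of the same flags, using temp directories instead of /var and assuming an `lz4` binary on PATH:

```shell
src=$(mktemp -d)
dst=$(mktemp -d)
echo hello > "$src/file"
# Create: tar pipes the archive through lz4 for compression.
tar -C "$src" -I lz4 -cf "$src/p.tar.lz4" file
# Extract with the same flags the provisioner uses; --xattrs keeps
# file capabilities (security.capability) intact on real images.
tar --xattrs --xattrs-include security.capability -I lz4 -C "$dst" -xf "$src/p.tar.lz4"
cat "$dst/file"
```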
	I0802 17:27:14.853164   13490 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0802 17:27:14.862494   13490 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2630 bytes)
	I0802 17:27:14.878210   13490 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0802 17:27:14.986040   13490 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0802 17:27:19.334528   13490 ssh_runner.go:235] Completed: sudo systemctl restart docker: (4.348447424s)
	I0802 17:27:19.334622   13490 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0802 17:27:19.352020   13490 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.30.3
	registry.k8s.io/kube-scheduler:v1.30.3
	registry.k8s.io/kube-controller-manager:v1.30.3
	registry.k8s.io/kube-proxy:v1.30.3
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0802 17:27:19.352042   13490 cache_images.go:84] Images are preloaded, skipping loading
	I0802 17:27:19.352054   13490 kubeadm.go:934] updating node { 192.168.39.195 8443 v1.30.3 docker true true} ...
	I0802 17:27:19.352159   13490 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-723198 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.195
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:addons-723198 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0802 17:27:19.352209   13490 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0802 17:27:19.403074   13490 cni.go:84] Creating CNI manager for ""
	I0802 17:27:19.403099   13490 cni.go:158] "kvm2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0802 17:27:19.403108   13490 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0802 17:27:19.403125   13490 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.195 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-723198 NodeName:addons-723198 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.195"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.195 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0802 17:27:19.403247   13490 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.195
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "addons-723198"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.195
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.195"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
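[Editor's note: the generated kubeadm config above is a four-document YAML stream (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) written to /var/tmp/minikube/kubeadm.yaml.new. A hypothetical sanity check that such a file declares all four kinds, run against a skeleton copy:]

```shell
#!/bin/sh
# Hypothetical check: write a skeleton of the four-document kubeadm
# config stream and verify every expected `kind:` is present.
set -eu
CFG=/tmp/kubeadm-demo.yaml
cat > "$CFG" <<'EOF'
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
kubernetesVersion: v1.30.3
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
EOF
for k in InitConfiguration ClusterConfiguration KubeletConfiguration KubeProxyConfiguration; do
  grep -q "^kind: $k$" "$CFG" || { echo "missing $k"; exit 1; }
done
echo "all four kinds present"
```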
	I0802 17:27:19.403300   13490 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0802 17:27:19.412422   13490 binaries.go:44] Found k8s binaries, skipping transfer
	I0802 17:27:19.412491   13490 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0802 17:27:19.421103   13490 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (314 bytes)
	I0802 17:27:19.436476   13490 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0802 17:27:19.451382   13490 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2161 bytes)
	I0802 17:27:19.467426   13490 ssh_runner.go:195] Run: grep 192.168.39.195	control-plane.minikube.internal$ /etc/hosts
	I0802 17:27:19.470981   13490 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.195	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
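[Editor's note: the bash one-liner above updates /etc/hosts idempotently: it strips any existing line ending in control-plane.minikube.internal, then appends the current IP mapping. A standalone sketch of the same pattern against a scratch file (hypothetical path, not /etc/hosts):]

```shell
#!/bin/sh
# Idempotent hosts-entry update, as in the log: remove any stale line
# for the name, then append the current mapping. Scratch file only.
set -eu
HOSTS=/tmp/hosts-demo
TAB=$(printf '\t')
printf '127.0.0.1\tlocalhost\n192.168.39.100\tcontrol-plane.minikube.internal\n' > "$HOSTS"
# Drop lines ending in the control-plane name, append the fresh entry.
{ grep -v "${TAB}control-plane.minikube.internal$" "$HOSTS"; \
  printf '192.168.39.195\tcontrol-plane.minikube.internal\n'; } > "$HOSTS.new"
mv "$HOSTS.new" "$HOSTS"
cat "$HOSTS"
```

Running it twice leaves exactly one entry for the name, which is the point of the grep-then-append shape the log uses.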
	I0802 17:27:19.482121   13490 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0802 17:27:19.598251   13490 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0802 17:27:19.619308   13490 certs.go:68] Setting up /home/jenkins/minikube-integration/19355-5398/.minikube/profiles/addons-723198 for IP: 192.168.39.195
	I0802 17:27:19.619332   13490 certs.go:194] generating shared ca certs ...
	I0802 17:27:19.619346   13490 certs.go:226] acquiring lock for ca certs: {Name:mkc60d37e2cd2468cc5859fd73fda4e7312391c5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0802 17:27:19.619495   13490 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19355-5398/.minikube/ca.key
	I0802 17:27:19.764826   13490 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19355-5398/.minikube/ca.crt ...
	I0802 17:27:19.764854   13490 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19355-5398/.minikube/ca.crt: {Name:mka5f821156d8ca1579890f3dd8a9848b9eb0a37 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0802 17:27:19.765017   13490 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19355-5398/.minikube/ca.key ...
	I0802 17:27:19.765028   13490 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19355-5398/.minikube/ca.key: {Name:mkd03f41b4278f1488309193906709736a67e952 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0802 17:27:19.765099   13490 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19355-5398/.minikube/proxy-client-ca.key
	I0802 17:27:19.947923   13490 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19355-5398/.minikube/proxy-client-ca.crt ...
	I0802 17:27:19.947951   13490 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19355-5398/.minikube/proxy-client-ca.crt: {Name:mk59421d2ba41f0af0fb1561f571aacbf6e6de8d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0802 17:27:19.948106   13490 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19355-5398/.minikube/proxy-client-ca.key ...
	I0802 17:27:19.948116   13490 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19355-5398/.minikube/proxy-client-ca.key: {Name:mk237782faf918102be329d57626dbd2dbf9985c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0802 17:27:19.948181   13490 certs.go:256] generating profile certs ...
	I0802 17:27:19.948238   13490 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19355-5398/.minikube/profiles/addons-723198/client.key
	I0802 17:27:19.948252   13490 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19355-5398/.minikube/profiles/addons-723198/client.crt with IP's: []
	I0802 17:27:20.216886   13490 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19355-5398/.minikube/profiles/addons-723198/client.crt ...
	I0802 17:27:20.216917   13490 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19355-5398/.minikube/profiles/addons-723198/client.crt: {Name:mk3006b962478a892210cedda93c3a2e6810e941 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0802 17:27:20.217071   13490 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19355-5398/.minikube/profiles/addons-723198/client.key ...
	I0802 17:27:20.217081   13490 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19355-5398/.minikube/profiles/addons-723198/client.key: {Name:mk070ad4af67ede1afb501334cf17438dbcf6f42 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0802 17:27:20.217148   13490 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19355-5398/.minikube/profiles/addons-723198/apiserver.key.29986c86
	I0802 17:27:20.217172   13490 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19355-5398/.minikube/profiles/addons-723198/apiserver.crt.29986c86 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.195]
	I0802 17:27:20.287769   13490 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19355-5398/.minikube/profiles/addons-723198/apiserver.crt.29986c86 ...
	I0802 17:27:20.287804   13490 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19355-5398/.minikube/profiles/addons-723198/apiserver.crt.29986c86: {Name:mk33e3471ad640ad4ff626c4ffadb6a623f286c2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0802 17:27:20.287987   13490 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19355-5398/.minikube/profiles/addons-723198/apiserver.key.29986c86 ...
	I0802 17:27:20.288004   13490 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19355-5398/.minikube/profiles/addons-723198/apiserver.key.29986c86: {Name:mkaddda612268482dee5e8dd5cd808946cd8917f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0802 17:27:20.288101   13490 certs.go:381] copying /home/jenkins/minikube-integration/19355-5398/.minikube/profiles/addons-723198/apiserver.crt.29986c86 -> /home/jenkins/minikube-integration/19355-5398/.minikube/profiles/addons-723198/apiserver.crt
	I0802 17:27:20.288197   13490 certs.go:385] copying /home/jenkins/minikube-integration/19355-5398/.minikube/profiles/addons-723198/apiserver.key.29986c86 -> /home/jenkins/minikube-integration/19355-5398/.minikube/profiles/addons-723198/apiserver.key
	I0802 17:27:20.288266   13490 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19355-5398/.minikube/profiles/addons-723198/proxy-client.key
	I0802 17:27:20.288291   13490 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19355-5398/.minikube/profiles/addons-723198/proxy-client.crt with IP's: []
	I0802 17:27:20.443801   13490 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19355-5398/.minikube/profiles/addons-723198/proxy-client.crt ...
	I0802 17:27:20.443831   13490 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19355-5398/.minikube/profiles/addons-723198/proxy-client.crt: {Name:mkb94fc7999f21ef786454a5b2d0f90f980f03ab Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0802 17:27:20.444006   13490 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19355-5398/.minikube/profiles/addons-723198/proxy-client.key ...
	I0802 17:27:20.444020   13490 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19355-5398/.minikube/profiles/addons-723198/proxy-client.key: {Name:mkc988b7747b1c9dfc22b80cc427a2e79066e365 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0802 17:27:20.444210   13490 certs.go:484] found cert: /home/jenkins/minikube-integration/19355-5398/.minikube/certs/ca-key.pem (1679 bytes)
	I0802 17:27:20.444253   13490 certs.go:484] found cert: /home/jenkins/minikube-integration/19355-5398/.minikube/certs/ca.pem (1078 bytes)
	I0802 17:27:20.444291   13490 certs.go:484] found cert: /home/jenkins/minikube-integration/19355-5398/.minikube/certs/cert.pem (1123 bytes)
	I0802 17:27:20.444325   13490 certs.go:484] found cert: /home/jenkins/minikube-integration/19355-5398/.minikube/certs/key.pem (1679 bytes)
	I0802 17:27:20.444868   13490 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5398/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0802 17:27:20.470966   13490 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5398/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0802 17:27:20.493996   13490 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5398/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0802 17:27:20.516018   13490 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5398/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0802 17:27:20.538450   13490 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5398/.minikube/profiles/addons-723198/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0802 17:27:20.561043   13490 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5398/.minikube/profiles/addons-723198/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0802 17:27:20.583306   13490 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5398/.minikube/profiles/addons-723198/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0802 17:27:20.605366   13490 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5398/.minikube/profiles/addons-723198/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0802 17:27:20.627751   13490 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5398/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0802 17:27:20.649386   13490 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0802 17:27:20.664888   13490 ssh_runner.go:195] Run: openssl version
	I0802 17:27:20.670196   13490 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0802 17:27:20.680760   13490 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0802 17:27:20.685214   13490 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug  2 17:27 /usr/share/ca-certificates/minikubeCA.pem
	I0802 17:27:20.685272   13490 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0802 17:27:20.690868   13490 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
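[Editor's note: the two commands above install minikubeCA under its OpenSSL subject-hash name, b5213941.0, so that OpenSSL's hashed-directory lookup in /etc/ssl/certs can find it. A sketch of that hash-and-symlink step using a throwaway self-signed certificate in a scratch directory (all paths hypothetical):]

```shell
#!/bin/sh
# Reproduce the CA-install step from the log: compute the cert's
# subject hash with `openssl x509 -hash` and link <hash>.0 to it.
set -eu
DIR=/tmp/cahash-demo
rm -rf "$DIR" && mkdir -p "$DIR"
# Throwaway self-signed CA standing in for minikubeCA.pem.
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -subj "/CN=demoCA" \
  -keyout "$DIR/ca.key" -out "$DIR/ca.pem" 2>/dev/null
HASH=$(openssl x509 -hash -noout -in "$DIR/ca.pem")
# Same shape as the log's `ln -fs ... /etc/ssl/certs/b5213941.0`.
ln -fs "$DIR/ca.pem" "$DIR/$HASH.0"
ls -l "$DIR/$HASH.0"
```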
	I0802 17:27:20.701479   13490 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0802 17:27:20.705411   13490 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0802 17:27:20.705466   13490 kubeadm.go:392] StartCluster: {Name:addons-723198 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:addons-723198 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.195 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0802 17:27:20.705598   13490 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0802 17:27:20.722044   13490 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0802 17:27:20.731645   13490 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0802 17:27:20.741192   13490 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0802 17:27:20.750325   13490 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0802 17:27:20.750348   13490 kubeadm.go:157] found existing configuration files:
	
	I0802 17:27:20.750386   13490 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0802 17:27:20.758759   13490 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0802 17:27:20.758830   13490 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0802 17:27:20.767602   13490 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0802 17:27:20.775820   13490 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0802 17:27:20.775875   13490 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0802 17:27:20.784911   13490 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0802 17:27:20.793773   13490 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0802 17:27:20.793823   13490 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0802 17:27:20.802677   13490 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0802 17:27:20.811332   13490 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0802 17:27:20.811384   13490 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0802 17:27:20.820516   13490 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0802 17:27:21.004269   13490 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0802 17:27:30.868138   13490 kubeadm.go:310] [init] Using Kubernetes version: v1.30.3
	I0802 17:27:30.868199   13490 kubeadm.go:310] [preflight] Running pre-flight checks
	I0802 17:27:30.868277   13490 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0802 17:27:30.868373   13490 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0802 17:27:30.868483   13490 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0802 17:27:30.868593   13490 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0802 17:27:30.870402   13490 out.go:204]   - Generating certificates and keys ...
	I0802 17:27:30.870486   13490 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0802 17:27:30.870552   13490 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0802 17:27:30.870638   13490 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0802 17:27:30.870731   13490 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0802 17:27:30.870831   13490 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0802 17:27:30.870898   13490 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0802 17:27:30.870969   13490 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0802 17:27:30.871141   13490 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-723198 localhost] and IPs [192.168.39.195 127.0.0.1 ::1]
	I0802 17:27:30.871222   13490 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0802 17:27:30.871386   13490 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-723198 localhost] and IPs [192.168.39.195 127.0.0.1 ::1]
	I0802 17:27:30.871486   13490 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0802 17:27:30.871572   13490 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0802 17:27:30.871632   13490 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0802 17:27:30.871708   13490 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0802 17:27:30.871782   13490 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0802 17:27:30.871860   13490 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0802 17:27:30.871930   13490 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0802 17:27:30.872012   13490 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0802 17:27:30.872062   13490 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0802 17:27:30.872133   13490 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0802 17:27:30.872196   13490 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0802 17:27:30.874083   13490 out.go:204]   - Booting up control plane ...
	I0802 17:27:30.874175   13490 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0802 17:27:30.874244   13490 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0802 17:27:30.874303   13490 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0802 17:27:30.874395   13490 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0802 17:27:30.874486   13490 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0802 17:27:30.874530   13490 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0802 17:27:30.874653   13490 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0802 17:27:30.874720   13490 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0802 17:27:30.874769   13490 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001067449s
	I0802 17:27:30.874859   13490 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0802 17:27:30.874941   13490 kubeadm.go:310] [api-check] The API server is healthy after 4.501647829s
	I0802 17:27:30.875085   13490 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0802 17:27:30.875265   13490 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0802 17:27:30.875338   13490 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0802 17:27:30.875498   13490 kubeadm.go:310] [mark-control-plane] Marking the node addons-723198 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0802 17:27:30.875562   13490 kubeadm.go:310] [bootstrap-token] Using token: 5v5eff.nv31up8e73ig1kvl
	I0802 17:27:30.876942   13490 out.go:204]   - Configuring RBAC rules ...
	I0802 17:27:30.877032   13490 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0802 17:27:30.877102   13490 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0802 17:27:30.877239   13490 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0802 17:27:30.877369   13490 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0802 17:27:30.877471   13490 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0802 17:27:30.877549   13490 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0802 17:27:30.877668   13490 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0802 17:27:30.877716   13490 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0802 17:27:30.877758   13490 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0802 17:27:30.877764   13490 kubeadm.go:310] 
	I0802 17:27:30.877815   13490 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0802 17:27:30.877822   13490 kubeadm.go:310] 
	I0802 17:27:30.877882   13490 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0802 17:27:30.877888   13490 kubeadm.go:310] 
	I0802 17:27:30.877914   13490 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0802 17:27:30.877963   13490 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0802 17:27:30.878006   13490 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0802 17:27:30.878011   13490 kubeadm.go:310] 
	I0802 17:27:30.878054   13490 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0802 17:27:30.878061   13490 kubeadm.go:310] 
	I0802 17:27:30.878102   13490 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0802 17:27:30.878107   13490 kubeadm.go:310] 
	I0802 17:27:30.878149   13490 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0802 17:27:30.878227   13490 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0802 17:27:30.878293   13490 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0802 17:27:30.878299   13490 kubeadm.go:310] 
	I0802 17:27:30.878371   13490 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0802 17:27:30.878438   13490 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0802 17:27:30.878443   13490 kubeadm.go:310] 
	I0802 17:27:30.878519   13490 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 5v5eff.nv31up8e73ig1kvl \
	I0802 17:27:30.878605   13490 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:56c05203a90a2cde2ed0a4497a63d8ef991acddfe1f197b69a9e67f698b6fd65 \
	I0802 17:27:30.878625   13490 kubeadm.go:310] 	--control-plane 
	I0802 17:27:30.878630   13490 kubeadm.go:310] 
	I0802 17:27:30.878706   13490 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0802 17:27:30.878712   13490 kubeadm.go:310] 
	I0802 17:27:30.878777   13490 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 5v5eff.nv31up8e73ig1kvl \
	I0802 17:27:30.878921   13490 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:56c05203a90a2cde2ed0a4497a63d8ef991acddfe1f197b69a9e67f698b6fd65 
	I0802 17:27:30.878936   13490 cni.go:84] Creating CNI manager for ""
	I0802 17:27:30.878949   13490 cni.go:158] "kvm2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0802 17:27:30.880264   13490 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0802 17:27:30.881467   13490 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0802 17:27:30.892767   13490 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
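The two lines above show minikube writing a 496-byte bridge CNI config to /etc/cni/net.d/1-k8s.conflist. The exact contents are not in the log; the sketch below is an illustrative bridge-plugin conflist of the general shape minikube generates (plugin names and the subnet value are assumptions, not the literal file):

```json
{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": {
        "type": "host-local",
        "subnet": "10.244.0.0/16"
      }
    },
    {
      "type": "portmap",
      "capabilities": { "portMappings": true }
    }
  ]
}
```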
	I0802 17:27:30.909726   13490 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0802 17:27:30.909833   13490 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0802 17:27:30.909849   13490 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-723198 minikube.k8s.io/updated_at=2024_08_02T17_27_30_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=db72189ad8010dba8f92a33c09569de9ae45dca9 minikube.k8s.io/name=addons-723198 minikube.k8s.io/primary=true
	I0802 17:27:30.920476   13490 ops.go:34] apiserver oom_adj: -16
	I0802 17:27:31.019703   13490 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0802 17:27:31.520616   13490 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0802 17:27:32.020026   13490 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0802 17:27:32.520646   13490 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0802 17:27:33.020530   13490 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0802 17:27:33.519796   13490 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0802 17:27:34.019802   13490 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0802 17:27:34.520762   13490 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0802 17:27:35.020013   13490 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0802 17:27:35.520355   13490 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0802 17:27:36.020147   13490 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0802 17:27:36.520619   13490 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0802 17:27:37.020679   13490 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0802 17:27:37.519764   13490 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0802 17:27:38.020012   13490 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0802 17:27:38.520049   13490 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0802 17:27:39.020000   13490 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0802 17:27:39.520584   13490 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0802 17:27:40.020202   13490 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0802 17:27:40.519761   13490 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0802 17:27:41.020033   13490 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0802 17:27:41.520765   13490 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0802 17:27:42.020004   13490 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0802 17:27:42.520410   13490 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0802 17:27:43.020091   13490 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0802 17:27:43.519972   13490 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0802 17:27:44.020534   13490 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
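The run of ssh_runner calls from 17:27:31 to 17:27:44 above is minikube polling `kubectl get sa default` roughly every 500ms until the default service account exists (the "elevateKubeSystemPrivileges" wait reported on the next line). A minimal sketch of that retry pattern in shell — the function name, interval, and probed command here are illustrative stand-ins, not minikube's actual implementation:

```shell
#!/usr/bin/env bash
# Retry a command until it succeeds or a deadline (in seconds) passes.
# Usage: poll_until_ready <timeout_secs> <command> [args...]
poll_until_ready() {
  local deadline=$(( $(date +%s) + $1 ))
  shift
  until "$@"; do
    # Give up once the deadline is reached.
    if [ "$(date +%s)" -ge "$deadline" ]; then
      return 1
    fi
    sleep 0.5
  done
}

# Example: `true` stands in for `kubectl get sa default ...`.
poll_until_ready 5 true && echo "ready"
```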
	I0802 17:27:44.118085   13490 kubeadm.go:1113] duration metric: took 13.208304628s to wait for elevateKubeSystemPrivileges
	I0802 17:27:44.118139   13490 kubeadm.go:394] duration metric: took 23.412677475s to StartCluster
	I0802 17:27:44.118164   13490 settings.go:142] acquiring lock: {Name:mk3ebe5d16f66f61a1d90b2e66af2fc9564c2fa2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0802 17:27:44.118299   13490 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19355-5398/kubeconfig
	I0802 17:27:44.118856   13490 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19355-5398/kubeconfig: {Name:mked0d3c6650288daaae03ce253ff0240f6cbf4c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0802 17:27:44.119457   13490 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0802 17:27:44.119482   13490 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.195 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0802 17:27:44.119544   13490 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0802 17:27:44.119644   13490 addons.go:69] Setting yakd=true in profile "addons-723198"
	I0802 17:27:44.119673   13490 addons.go:69] Setting helm-tiller=true in profile "addons-723198"
	I0802 17:27:44.119689   13490 addons.go:234] Setting addon yakd=true in "addons-723198"
	I0802 17:27:44.119695   13490 addons.go:69] Setting ingress=true in profile "addons-723198"
	I0802 17:27:44.119699   13490 addons.go:69] Setting inspektor-gadget=true in profile "addons-723198"
	I0802 17:27:44.119701   13490 config.go:182] Loaded profile config "addons-723198": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0802 17:27:44.119711   13490 addons.go:234] Setting addon helm-tiller=true in "addons-723198"
	I0802 17:27:44.119715   13490 addons.go:234] Setting addon ingress=true in "addons-723198"
	I0802 17:27:44.119724   13490 addons.go:234] Setting addon inspektor-gadget=true in "addons-723198"
	I0802 17:27:44.119727   13490 host.go:66] Checking if "addons-723198" exists ...
	I0802 17:27:44.119754   13490 host.go:66] Checking if "addons-723198" exists ...
	I0802 17:27:44.119765   13490 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-723198"
	I0802 17:27:44.119766   13490 addons.go:69] Setting metrics-server=true in profile "addons-723198"
	I0802 17:27:44.119766   13490 addons.go:69] Setting storage-provisioner=true in profile "addons-723198"
	I0802 17:27:44.119778   13490 host.go:66] Checking if "addons-723198" exists ...
	I0802 17:27:44.119785   13490 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-723198"
	I0802 17:27:44.119788   13490 addons.go:234] Setting addon metrics-server=true in "addons-723198"
	I0802 17:27:44.119793   13490 addons.go:234] Setting addon storage-provisioner=true in "addons-723198"
	I0802 17:27:44.119802   13490 host.go:66] Checking if "addons-723198" exists ...
	I0802 17:27:44.119815   13490 host.go:66] Checking if "addons-723198" exists ...
	I0802 17:27:44.119815   13490 host.go:66] Checking if "addons-723198" exists ...
	I0802 17:27:44.119757   13490 host.go:66] Checking if "addons-723198" exists ...
	I0802 17:27:44.119678   13490 addons.go:69] Setting registry=true in profile "addons-723198"
	I0802 17:27:44.120176   13490 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0802 17:27:44.120182   13490 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0802 17:27:44.120186   13490 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0802 17:27:44.120189   13490 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0802 17:27:44.120198   13490 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0802 17:27:44.120199   13490 addons.go:234] Setting addon registry=true in "addons-723198"
	I0802 17:27:44.120202   13490 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0802 17:27:44.120208   13490 addons.go:69] Setting gcp-auth=true in profile "addons-723198"
	I0802 17:27:44.120214   13490 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0802 17:27:44.120224   13490 mustload.go:65] Loading cluster: addons-723198
	I0802 17:27:44.120244   13490 addons.go:69] Setting volumesnapshots=true in profile "addons-723198"
	I0802 17:27:44.120260   13490 addons.go:234] Setting addon volumesnapshots=true in "addons-723198"
	I0802 17:27:44.120264   13490 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0802 17:27:44.120270   13490 addons.go:69] Setting volcano=true in profile "addons-723198"
	I0802 17:27:44.120287   13490 addons.go:69] Setting cloud-spanner=true in profile "addons-723198"
	I0802 17:27:44.120293   13490 addons.go:234] Setting addon volcano=true in "addons-723198"
	I0802 17:27:44.120301   13490 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0802 17:27:44.120308   13490 addons.go:234] Setting addon cloud-spanner=true in "addons-723198"
	I0802 17:27:44.120316   13490 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-723198"
	I0802 17:27:44.120320   13490 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0802 17:27:44.120348   13490 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-723198"
	I0802 17:27:44.119666   13490 addons.go:69] Setting ingress-dns=true in profile "addons-723198"
	I0802 17:27:44.120370   13490 addons.go:234] Setting addon ingress-dns=true in "addons-723198"
	I0802 17:27:44.120173   13490 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0802 17:27:44.120385   13490 config.go:182] Loaded profile config "addons-723198": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0802 17:27:44.120390   13490 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0802 17:27:44.119755   13490 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-723198"
	I0802 17:27:44.120451   13490 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-723198"
	I0802 17:27:44.120535   13490 host.go:66] Checking if "addons-723198" exists ...
	I0802 17:27:44.120548   13490 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0802 17:27:44.120566   13490 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0802 17:27:44.120700   13490 host.go:66] Checking if "addons-723198" exists ...
	I0802 17:27:44.120703   13490 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0802 17:27:44.120725   13490 host.go:66] Checking if "addons-723198" exists ...
	I0802 17:27:44.120733   13490 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0802 17:27:44.120225   13490 host.go:66] Checking if "addons-723198" exists ...
	I0802 17:27:44.120791   13490 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0802 17:27:44.120817   13490 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0802 17:27:44.120878   13490 host.go:66] Checking if "addons-723198" exists ...
	I0802 17:27:44.121046   13490 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0802 17:27:44.121070   13490 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0802 17:27:44.121074   13490 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0802 17:27:44.121096   13490 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0802 17:27:44.121128   13490 host.go:66] Checking if "addons-723198" exists ...
	I0802 17:27:44.121162   13490 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0802 17:27:44.121179   13490 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0802 17:27:44.121244   13490 addons.go:69] Setting default-storageclass=true in profile "addons-723198"
	I0802 17:27:44.121285   13490 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-723198"
	I0802 17:27:44.121376   13490 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0802 17:27:44.121403   13490 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0802 17:27:44.121478   13490 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0802 17:27:44.121495   13490 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0802 17:27:44.125167   13490 out.go:177] * Verifying Kubernetes components...
	I0802 17:27:44.126748   13490 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0802 17:27:44.141876   13490 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41851
	I0802 17:27:44.142300   13490 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40513
	I0802 17:27:44.142465   13490 main.go:141] libmachine: () Calling .GetVersion
	I0802 17:27:44.142652   13490 main.go:141] libmachine: () Calling .GetVersion
	I0802 17:27:44.143198   13490 main.go:141] libmachine: Using API Version  1
	I0802 17:27:44.143229   13490 main.go:141] libmachine: () Calling .SetConfigRaw
	I0802 17:27:44.143569   13490 main.go:141] libmachine: Using API Version  1
	I0802 17:27:44.143593   13490 main.go:141] libmachine: () Calling .SetConfigRaw
	I0802 17:27:44.143668   13490 main.go:141] libmachine: () Calling .GetMachineName
	I0802 17:27:44.143887   13490 main.go:141] libmachine: (addons-723198) Calling .GetState
	I0802 17:27:44.144013   13490 main.go:141] libmachine: () Calling .GetMachineName
	I0802 17:27:44.144303   13490 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38389
	I0802 17:27:44.144565   13490 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0802 17:27:44.144619   13490 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0802 17:27:44.145206   13490 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40861
	I0802 17:27:44.151458   13490 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0802 17:27:44.151515   13490 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0802 17:27:44.151565   13490 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0802 17:27:44.151603   13490 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0802 17:27:44.154680   13490 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42831
	I0802 17:27:44.154756   13490 host.go:66] Checking if "addons-723198" exists ...
	I0802 17:27:44.154853   13490 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40775
	I0802 17:27:44.154921   13490 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46267
	I0802 17:27:44.155074   13490 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45547
	I0802 17:27:44.155177   13490 main.go:141] libmachine: () Calling .GetVersion
	I0802 17:27:44.155283   13490 main.go:141] libmachine: () Calling .GetVersion
	I0802 17:27:44.155762   13490 main.go:141] libmachine: Using API Version  1
	I0802 17:27:44.155783   13490 main.go:141] libmachine: () Calling .SetConfigRaw
	I0802 17:27:44.156179   13490 main.go:141] libmachine: () Calling .GetMachineName
	I0802 17:27:44.156254   13490 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0802 17:27:44.156289   13490 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0802 17:27:44.156321   13490 main.go:141] libmachine: Using API Version  1
	I0802 17:27:44.156335   13490 main.go:141] libmachine: () Calling .SetConfigRaw
	I0802 17:27:44.156391   13490 main.go:141] libmachine: () Calling .GetVersion
	I0802 17:27:44.156847   13490 main.go:141] libmachine: () Calling .GetMachineName
	I0802 17:27:44.156975   13490 main.go:141] libmachine: Using API Version  1
	I0802 17:27:44.156989   13490 main.go:141] libmachine: () Calling .SetConfigRaw
	I0802 17:27:44.157390   13490 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0802 17:27:44.157426   13490 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0802 17:27:44.159646   13490 main.go:141] libmachine: () Calling .GetVersion
	I0802 17:27:44.159691   13490 main.go:141] libmachine: () Calling .GetVersion
	I0802 17:27:44.159650   13490 main.go:141] libmachine: () Calling .GetVersion
	I0802 17:27:44.159660   13490 main.go:141] libmachine: () Calling .GetMachineName
	I0802 17:27:44.160027   13490 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0802 17:27:44.160055   13490 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0802 17:27:44.160246   13490 main.go:141] libmachine: Using API Version  1
	I0802 17:27:44.160262   13490 main.go:141] libmachine: () Calling .SetConfigRaw
	I0802 17:27:44.160391   13490 main.go:141] libmachine: Using API Version  1
	I0802 17:27:44.160408   13490 main.go:141] libmachine: () Calling .SetConfigRaw
	I0802 17:27:44.160562   13490 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0802 17:27:44.160585   13490 main.go:141] libmachine: () Calling .GetMachineName
	I0802 17:27:44.160594   13490 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0802 17:27:44.161126   13490 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0802 17:27:44.161151   13490 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0802 17:27:44.164015   13490 main.go:141] libmachine: () Calling .GetMachineName
	I0802 17:27:44.164188   13490 main.go:141] libmachine: Using API Version  1
	I0802 17:27:44.164201   13490 main.go:141] libmachine: () Calling .SetConfigRaw
	I0802 17:27:44.164603   13490 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0802 17:27:44.164622   13490 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0802 17:27:44.167518   13490 main.go:141] libmachine: () Calling .GetMachineName
	I0802 17:27:44.172626   13490 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0802 17:27:44.172671   13490 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0802 17:27:44.178919   13490 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39983
	I0802 17:27:44.179639   13490 main.go:141] libmachine: () Calling .GetVersion
	I0802 17:27:44.180298   13490 main.go:141] libmachine: Using API Version  1
	I0802 17:27:44.180325   13490 main.go:141] libmachine: () Calling .SetConfigRaw
	I0802 17:27:44.180682   13490 main.go:141] libmachine: () Calling .GetMachineName
	I0802 17:27:44.181189   13490 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0802 17:27:44.181232   13490 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0802 17:27:44.192477   13490 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33831
	I0802 17:27:44.192661   13490 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35669
	I0802 17:27:44.193010   13490 main.go:141] libmachine: () Calling .GetVersion
	I0802 17:27:44.193115   13490 main.go:141] libmachine: () Calling .GetVersion
	I0802 17:27:44.193727   13490 main.go:141] libmachine: Using API Version  1
	I0802 17:27:44.193755   13490 main.go:141] libmachine: () Calling .SetConfigRaw
	I0802 17:27:44.194097   13490 main.go:141] libmachine: () Calling .GetMachineName
	I0802 17:27:44.194300   13490 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45041
	I0802 17:27:44.194417   13490 main.go:141] libmachine: (addons-723198) Calling .GetState
	I0802 17:27:44.194510   13490 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46033
	I0802 17:27:44.194758   13490 main.go:141] libmachine: Using API Version  1
	I0802 17:27:44.194779   13490 main.go:141] libmachine: () Calling .SetConfigRaw
	I0802 17:27:44.194956   13490 main.go:141] libmachine: () Calling .GetVersion
	I0802 17:27:44.195312   13490 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41199
	I0802 17:27:44.195402   13490 main.go:141] libmachine: () Calling .GetVersion
	I0802 17:27:44.195437   13490 main.go:141] libmachine: () Calling .GetMachineName
	I0802 17:27:44.195907   13490 main.go:141] libmachine: Using API Version  1
	I0802 17:27:44.195932   13490 main.go:141] libmachine: () Calling .SetConfigRaw
	I0802 17:27:44.196005   13490 main.go:141] libmachine: () Calling .GetVersion
	I0802 17:27:44.196118   13490 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0802 17:27:44.196156   13490 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0802 17:27:44.196673   13490 main.go:141] libmachine: Using API Version  1
	I0802 17:27:44.196692   13490 main.go:141] libmachine: () Calling .SetConfigRaw
	I0802 17:27:44.196789   13490 main.go:141] libmachine: Using API Version  1
	I0802 17:27:44.196803   13490 main.go:141] libmachine: () Calling .SetConfigRaw
	I0802 17:27:44.197001   13490 main.go:141] libmachine: () Calling .GetMachineName
	I0802 17:27:44.197133   13490 main.go:141] libmachine: () Calling .GetMachineName
	I0802 17:27:44.197195   13490 main.go:141] libmachine: (addons-723198) Calling .GetState
	I0802 17:27:44.197372   13490 main.go:141] libmachine: (addons-723198) Calling .GetState
	I0802 17:27:44.197709   13490 main.go:141] libmachine: () Calling .GetMachineName
	I0802 17:27:44.198458   13490 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0802 17:27:44.198495   13490 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0802 17:27:44.199829   13490 main.go:141] libmachine: (addons-723198) Calling .DriverName
	I0802 17:27:44.200274   13490 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33403
	I0802 17:27:44.200998   13490 main.go:141] libmachine: () Calling .GetVersion
	I0802 17:27:44.201656   13490 main.go:141] libmachine: Using API Version  1
	I0802 17:27:44.201672   13490 main.go:141] libmachine: () Calling .SetConfigRaw
	I0802 17:27:44.201754   13490 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.1
	I0802 17:27:44.202181   13490 main.go:141] libmachine: () Calling .GetMachineName
	I0802 17:27:44.202393   13490 main.go:141] libmachine: (addons-723198) Calling .GetState
	I0802 17:27:44.203000   13490 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-723198"
	I0802 17:27:44.203037   13490 host.go:66] Checking if "addons-723198" exists ...
	I0802 17:27:44.203068   13490 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0802 17:27:44.203083   13490 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0802 17:27:44.203109   13490 main.go:141] libmachine: (addons-723198) Calling .GetSSHHostname
	I0802 17:27:44.203373   13490 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0802 17:27:44.203409   13490 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0802 17:27:44.204993   13490 addons.go:234] Setting addon default-storageclass=true in "addons-723198"
	I0802 17:27:44.205033   13490 host.go:66] Checking if "addons-723198" exists ...
	I0802 17:27:44.205378   13490 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0802 17:27:44.205424   13490 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0802 17:27:44.205653   13490 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40763
	I0802 17:27:44.206116   13490 main.go:141] libmachine: () Calling .GetVersion
	I0802 17:27:44.206519   13490 main.go:141] libmachine: (addons-723198) DBG | domain addons-723198 has defined MAC address 52:54:00:6f:78:5c in network mk-addons-723198
	I0802 17:27:44.206868   13490 main.go:141] libmachine: Using API Version  1
	I0802 17:27:44.206886   13490 main.go:141] libmachine: () Calling .SetConfigRaw
	I0802 17:27:44.207353   13490 main.go:141] libmachine: (addons-723198) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6f:78:5c", ip: ""} in network mk-addons-723198: {Iface:virbr1 ExpiryTime:2024-08-02 18:26:56 +0000 UTC Type:0 Mac:52:54:00:6f:78:5c Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:addons-723198 Clientid:01:52:54:00:6f:78:5c}
	I0802 17:27:44.207374   13490 main.go:141] libmachine: (addons-723198) DBG | domain addons-723198 has defined IP address 192.168.39.195 and MAC address 52:54:00:6f:78:5c in network mk-addons-723198
	I0802 17:27:44.207511   13490 main.go:141] libmachine: (addons-723198) Calling .GetSSHPort
	I0802 17:27:44.207649   13490 main.go:141] libmachine: (addons-723198) Calling .GetSSHKeyPath
	I0802 17:27:44.207773   13490 main.go:141] libmachine: (addons-723198) Calling .GetSSHUsername
	I0802 17:27:44.207876   13490 sshutil.go:53] new ssh client: &{IP:192.168.39.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-5398/.minikube/machines/addons-723198/id_rsa Username:docker}
	I0802 17:27:44.208505   13490 main.go:141] libmachine: () Calling .GetMachineName
	I0802 17:27:44.209015   13490 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0802 17:27:44.209052   13490 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0802 17:27:44.209248   13490 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42481
	I0802 17:27:44.209548   13490 main.go:141] libmachine: () Calling .GetVersion
	I0802 17:27:44.210133   13490 main.go:141] libmachine: Using API Version  1
	I0802 17:27:44.210148   13490 main.go:141] libmachine: () Calling .SetConfigRaw
	I0802 17:27:44.210574   13490 main.go:141] libmachine: () Calling .GetMachineName
	I0802 17:27:44.210851   13490 main.go:141] libmachine: (addons-723198) Calling .GetState
	I0802 17:27:44.211862   13490 main.go:141] libmachine: (addons-723198) Calling .DriverName
	I0802 17:27:44.212491   13490 main.go:141] libmachine: (addons-723198) Calling .DriverName
	I0802 17:27:44.213844   13490 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I0802 17:27:44.214710   13490 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.30.0
	I0802 17:27:44.215744   13490 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0802 17:27:44.215761   13490 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I0802 17:27:44.215779   13490 main.go:141] libmachine: (addons-723198) Calling .GetSSHHostname
	I0802 17:27:44.216427   13490 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0802 17:27:44.216439   13490 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0802 17:27:44.216456   13490 main.go:141] libmachine: (addons-723198) Calling .GetSSHHostname
	I0802 17:27:44.217653   13490 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44887
	I0802 17:27:44.217770   13490 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46509
	I0802 17:27:44.218378   13490 main.go:141] libmachine: () Calling .GetVersion
	I0802 17:27:44.218757   13490 main.go:141] libmachine: () Calling .GetVersion
	I0802 17:27:44.218935   13490 main.go:141] libmachine: Using API Version  1
	I0802 17:27:44.218961   13490 main.go:141] libmachine: () Calling .SetConfigRaw
	I0802 17:27:44.219286   13490 main.go:141] libmachine: Using API Version  1
	I0802 17:27:44.219306   13490 main.go:141] libmachine: () Calling .SetConfigRaw
	I0802 17:27:44.219364   13490 main.go:141] libmachine: () Calling .GetMachineName
	I0802 17:27:44.219637   13490 main.go:141] libmachine: (addons-723198) Calling .GetState
	I0802 17:27:44.220198   13490 main.go:141] libmachine: () Calling .GetMachineName
	I0802 17:27:44.220455   13490 main.go:141] libmachine: (addons-723198) Calling .GetState
	I0802 17:27:44.220735   13490 main.go:141] libmachine: (addons-723198) DBG | domain addons-723198 has defined MAC address 52:54:00:6f:78:5c in network mk-addons-723198
	I0802 17:27:44.221320   13490 main.go:141] libmachine: (addons-723198) DBG | domain addons-723198 has defined MAC address 52:54:00:6f:78:5c in network mk-addons-723198
	I0802 17:27:44.221913   13490 main.go:141] libmachine: (addons-723198) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6f:78:5c", ip: ""} in network mk-addons-723198: {Iface:virbr1 ExpiryTime:2024-08-02 18:26:56 +0000 UTC Type:0 Mac:52:54:00:6f:78:5c Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:addons-723198 Clientid:01:52:54:00:6f:78:5c}
	I0802 17:27:44.221980   13490 main.go:141] libmachine: (addons-723198) DBG | domain addons-723198 has defined IP address 192.168.39.195 and MAC address 52:54:00:6f:78:5c in network mk-addons-723198
	I0802 17:27:44.222244   13490 main.go:141] libmachine: (addons-723198) Calling .GetSSHPort
	I0802 17:27:44.222427   13490 main.go:141] libmachine: (addons-723198) Calling .GetSSHKeyPath
	I0802 17:27:44.222629   13490 main.go:141] libmachine: (addons-723198) Calling .DriverName
	I0802 17:27:44.222682   13490 main.go:141] libmachine: (addons-723198) Calling .GetSSHUsername
	I0802 17:27:44.222723   13490 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33727
	I0802 17:27:44.223187   13490 sshutil.go:53] new ssh client: &{IP:192.168.39.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-5398/.minikube/machines/addons-723198/id_rsa Username:docker}
	I0802 17:27:44.223445   13490 main.go:141] libmachine: () Calling .GetVersion
	I0802 17:27:44.223880   13490 main.go:141] libmachine: Using API Version  1
	I0802 17:27:44.223898   13490 main.go:141] libmachine: () Calling .SetConfigRaw
	I0802 17:27:44.224002   13490 main.go:141] libmachine: (addons-723198) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6f:78:5c", ip: ""} in network mk-addons-723198: {Iface:virbr1 ExpiryTime:2024-08-02 18:26:56 +0000 UTC Type:0 Mac:52:54:00:6f:78:5c Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:addons-723198 Clientid:01:52:54:00:6f:78:5c}
	I0802 17:27:44.224017   13490 main.go:141] libmachine: (addons-723198) DBG | domain addons-723198 has defined IP address 192.168.39.195 and MAC address 52:54:00:6f:78:5c in network mk-addons-723198
	I0802 17:27:44.224305   13490 main.go:141] libmachine: () Calling .GetMachineName
	I0802 17:27:44.224348   13490 main.go:141] libmachine: (addons-723198) Calling .GetSSHPort
	I0802 17:27:44.224518   13490 main.go:141] libmachine: (addons-723198) Calling .GetState
	I0802 17:27:44.224587   13490 main.go:141] libmachine: (addons-723198) Calling .GetSSHKeyPath
	I0802 17:27:44.224644   13490 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.1
	I0802 17:27:44.224740   13490 main.go:141] libmachine: (addons-723198) Calling .GetSSHUsername
	I0802 17:27:44.224872   13490 sshutil.go:53] new ssh client: &{IP:192.168.39.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-5398/.minikube/machines/addons-723198/id_rsa Username:docker}
	I0802 17:27:44.225423   13490 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40499
	I0802 17:27:44.226890   13490 main.go:141] libmachine: (addons-723198) Calling .DriverName
	I0802 17:27:44.226912   13490 main.go:141] libmachine: (addons-723198) Calling .DriverName
	I0802 17:27:44.227230   13490 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33813
	I0802 17:27:44.227252   13490 main.go:141] libmachine: () Calling .GetVersion
	I0802 17:27:44.227890   13490 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0802 17:27:44.229085   13490 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39095
	I0802 17:27:44.229564   13490 main.go:141] libmachine: () Calling .GetVersion
	I0802 17:27:44.230167   13490 main.go:141] libmachine: Using API Version  1
	I0802 17:27:44.230183   13490 main.go:141] libmachine: () Calling .SetConfigRaw
	I0802 17:27:44.230449   13490 main.go:141] libmachine: Using API Version  1
	I0802 17:27:44.230469   13490 main.go:141] libmachine: () Calling .SetConfigRaw
	I0802 17:27:44.230665   13490 main.go:141] libmachine: () Calling .GetMachineName
	I0802 17:27:44.230868   13490 main.go:141] libmachine: () Calling .GetMachineName
	I0802 17:27:44.230911   13490 main.go:141] libmachine: (addons-723198) Calling .GetState
	I0802 17:27:44.231550   13490 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0802 17:27:44.231593   13490 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.1
	I0802 17:27:44.232176   13490 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0802 17:27:44.232214   13490 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0802 17:27:44.232605   13490 main.go:141] libmachine: () Calling .GetVersion
	I0802 17:27:44.233123   13490 main.go:141] libmachine: Using API Version  1
	I0802 17:27:44.233146   13490 main.go:141] libmachine: () Calling .SetConfigRaw
	I0802 17:27:44.233537   13490 main.go:141] libmachine: () Calling .GetMachineName
	I0802 17:27:44.233967   13490 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0802 17:27:44.234091   13490 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0802 17:27:44.234105   13490 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0802 17:27:44.234119   13490 main.go:141] libmachine: (addons-723198) Calling .GetSSHHostname
	I0802 17:27:44.234197   13490 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0802 17:27:44.234202   13490 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0802 17:27:44.234210   13490 main.go:141] libmachine: (addons-723198) Calling .GetSSHHostname
	I0802 17:27:44.236934   13490 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0802 17:27:44.236951   13490 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0802 17:27:44.237188   13490 main.go:141] libmachine: (addons-723198) Calling .GetSSHHostname
	I0802 17:27:44.237252   13490 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46227
	I0802 17:27:44.237827   13490 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42191
	I0802 17:27:44.237958   13490 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0802 17:27:44.237977   13490 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0802 17:27:44.238337   13490 main.go:141] libmachine: (addons-723198) DBG | domain addons-723198 has defined MAC address 52:54:00:6f:78:5c in network mk-addons-723198
	I0802 17:27:44.238859   13490 main.go:141] libmachine: (addons-723198) Calling .DriverName
	I0802 17:27:44.238891   13490 main.go:141] libmachine: () Calling .GetVersion
	I0802 17:27:44.239055   13490 main.go:141] libmachine: (addons-723198) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6f:78:5c", ip: ""} in network mk-addons-723198: {Iface:virbr1 ExpiryTime:2024-08-02 18:26:56 +0000 UTC Type:0 Mac:52:54:00:6f:78:5c Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:addons-723198 Clientid:01:52:54:00:6f:78:5c}
	I0802 17:27:44.239445   13490 main.go:141] libmachine: (addons-723198) DBG | domain addons-723198 has defined IP address 192.168.39.195 and MAC address 52:54:00:6f:78:5c in network mk-addons-723198
	I0802 17:27:44.239809   13490 main.go:141] libmachine: (addons-723198) Calling .GetSSHPort
	I0802 17:27:44.239946   13490 main.go:141] libmachine: (addons-723198) Calling .GetSSHKeyPath
	I0802 17:27:44.240094   13490 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36233
	I0802 17:27:44.240204   13490 main.go:141] libmachine: (addons-723198) Calling .GetSSHUsername
	I0802 17:27:44.240436   13490 sshutil.go:53] new ssh client: &{IP:192.168.39.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-5398/.minikube/machines/addons-723198/id_rsa Username:docker}
	I0802 17:27:44.240805   13490 out.go:177]   - Using image docker.io/registry:2.8.3
	I0802 17:27:44.241236   13490 main.go:141] libmachine: Using API Version  1
	I0802 17:27:44.241255   13490 main.go:141] libmachine: () Calling .SetConfigRaw
	I0802 17:27:44.241268   13490 main.go:141] libmachine: () Calling .GetVersion
	I0802 17:27:44.241699   13490 main.go:141] libmachine: () Calling .GetVersion
	I0802 17:27:44.241841   13490 main.go:141] libmachine: (addons-723198) DBG | domain addons-723198 has defined MAC address 52:54:00:6f:78:5c in network mk-addons-723198
	I0802 17:27:44.241958   13490 main.go:141] libmachine: Using API Version  1
	I0802 17:27:44.241981   13490 main.go:141] libmachine: () Calling .SetConfigRaw
	I0802 17:27:44.242224   13490 main.go:141] libmachine: Using API Version  1
	I0802 17:27:44.242242   13490 main.go:141] libmachine: () Calling .SetConfigRaw
	I0802 17:27:44.242307   13490 main.go:141] libmachine: (addons-723198) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6f:78:5c", ip: ""} in network mk-addons-723198: {Iface:virbr1 ExpiryTime:2024-08-02 18:26:56 +0000 UTC Type:0 Mac:52:54:00:6f:78:5c Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:addons-723198 Clientid:01:52:54:00:6f:78:5c}
	I0802 17:27:44.242320   13490 main.go:141] libmachine: (addons-723198) DBG | domain addons-723198 has defined IP address 192.168.39.195 and MAC address 52:54:00:6f:78:5c in network mk-addons-723198
	I0802 17:27:44.242776   13490 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0802 17:27:44.242876   13490 main.go:141] libmachine: () Calling .GetMachineName
	I0802 17:27:44.242936   13490 main.go:141] libmachine: () Calling .GetMachineName
	I0802 17:27:44.243102   13490 main.go:141] libmachine: (addons-723198) Calling .DriverName
	I0802 17:27:44.243722   13490 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0802 17:27:44.243757   13490 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0802 17:27:44.243920   13490 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0802 17:27:44.243933   13490 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0802 17:27:44.243953   13490 main.go:141] libmachine: (addons-723198) Calling .GetSSHHostname
	I0802 17:27:44.243975   13490 main.go:141] libmachine: (addons-723198) Calling .GetSSHPort
	I0802 17:27:44.245052   13490 main.go:141] libmachine: () Calling .GetMachineName
	I0802 17:27:44.245053   13490 main.go:141] libmachine: (addons-723198) DBG | domain addons-723198 has defined MAC address 52:54:00:6f:78:5c in network mk-addons-723198
	I0802 17:27:44.245100   13490 main.go:141] libmachine: (addons-723198) Calling .GetSSHKeyPath
	I0802 17:27:44.245308   13490 main.go:141] libmachine: (addons-723198) Calling .GetSSHUsername
	I0802 17:27:44.245710   13490 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0802 17:27:44.245738   13490 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0802 17:27:44.246051   13490 sshutil.go:53] new ssh client: &{IP:192.168.39.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-5398/.minikube/machines/addons-723198/id_rsa Username:docker}
	I0802 17:27:44.246233   13490 main.go:141] libmachine: (addons-723198) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6f:78:5c", ip: ""} in network mk-addons-723198: {Iface:virbr1 ExpiryTime:2024-08-02 18:26:56 +0000 UTC Type:0 Mac:52:54:00:6f:78:5c Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:addons-723198 Clientid:01:52:54:00:6f:78:5c}
	I0802 17:27:44.246250   13490 main.go:141] libmachine: (addons-723198) DBG | domain addons-723198 has defined IP address 192.168.39.195 and MAC address 52:54:00:6f:78:5c in network mk-addons-723198
	I0802 17:27:44.246299   13490 main.go:141] libmachine: (addons-723198) Calling .GetSSHPort
	I0802 17:27:44.246540   13490 main.go:141] libmachine: (addons-723198) Calling .GetSSHKeyPath
	I0802 17:27:44.246895   13490 main.go:141] libmachine: (addons-723198) Calling .GetSSHUsername
	I0802 17:27:44.247228   13490 sshutil.go:53] new ssh client: &{IP:192.168.39.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-5398/.minikube/machines/addons-723198/id_rsa Username:docker}
	I0802 17:27:44.247673   13490 main.go:141] libmachine: (addons-723198) DBG | domain addons-723198 has defined MAC address 52:54:00:6f:78:5c in network mk-addons-723198
	I0802 17:27:44.248072   13490 main.go:141] libmachine: (addons-723198) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6f:78:5c", ip: ""} in network mk-addons-723198: {Iface:virbr1 ExpiryTime:2024-08-02 18:26:56 +0000 UTC Type:0 Mac:52:54:00:6f:78:5c Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:addons-723198 Clientid:01:52:54:00:6f:78:5c}
	I0802 17:27:44.248089   13490 main.go:141] libmachine: (addons-723198) DBG | domain addons-723198 has defined IP address 192.168.39.195 and MAC address 52:54:00:6f:78:5c in network mk-addons-723198
	I0802 17:27:44.248303   13490 main.go:141] libmachine: (addons-723198) Calling .GetSSHPort
	I0802 17:27:44.248476   13490 main.go:141] libmachine: (addons-723198) Calling .GetSSHKeyPath
	I0802 17:27:44.248659   13490 main.go:141] libmachine: (addons-723198) Calling .GetSSHUsername
	I0802 17:27:44.248781   13490 sshutil.go:53] new ssh client: &{IP:192.168.39.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-5398/.minikube/machines/addons-723198/id_rsa Username:docker}
	I0802 17:27:44.251929   13490 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46859
	I0802 17:27:44.252397   13490 main.go:141] libmachine: () Calling .GetVersion
	I0802 17:27:44.252889   13490 main.go:141] libmachine: Using API Version  1
	I0802 17:27:44.252904   13490 main.go:141] libmachine: () Calling .SetConfigRaw
	I0802 17:27:44.253279   13490 main.go:141] libmachine: () Calling .GetMachineName
	I0802 17:27:44.253480   13490 main.go:141] libmachine: (addons-723198) Calling .GetState
	I0802 17:27:44.258355   13490 main.go:141] libmachine: (addons-723198) Calling .DriverName
	I0802 17:27:44.260252   13490 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.22
	I0802 17:27:44.260914   13490 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37733
	I0802 17:27:44.261394   13490 main.go:141] libmachine: () Calling .GetVersion
	I0802 17:27:44.261440   13490 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0802 17:27:44.261451   13490 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0802 17:27:44.261468   13490 main.go:141] libmachine: (addons-723198) Calling .GetSSHHostname
	I0802 17:27:44.261892   13490 main.go:141] libmachine: Using API Version  1
	I0802 17:27:44.261905   13490 main.go:141] libmachine: () Calling .SetConfigRaw
	I0802 17:27:44.263163   13490 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42979
	I0802 17:27:44.263170   13490 main.go:141] libmachine: () Calling .GetMachineName
	I0802 17:27:44.263403   13490 main.go:141] libmachine: (addons-723198) Calling .GetState
	I0802 17:27:44.263592   13490 main.go:141] libmachine: () Calling .GetVersion
	I0802 17:27:44.264174   13490 main.go:141] libmachine: Using API Version  1
	I0802 17:27:44.264190   13490 main.go:141] libmachine: () Calling .SetConfigRaw
	I0802 17:27:44.264517   13490 main.go:141] libmachine: () Calling .GetMachineName
	I0802 17:27:44.264694   13490 main.go:141] libmachine: (addons-723198) Calling .GetState
	I0802 17:27:44.265626   13490 main.go:141] libmachine: (addons-723198) DBG | domain addons-723198 has defined MAC address 52:54:00:6f:78:5c in network mk-addons-723198
	I0802 17:27:44.266000   13490 main.go:141] libmachine: (addons-723198) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6f:78:5c", ip: ""} in network mk-addons-723198: {Iface:virbr1 ExpiryTime:2024-08-02 18:26:56 +0000 UTC Type:0 Mac:52:54:00:6f:78:5c Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:addons-723198 Clientid:01:52:54:00:6f:78:5c}
	I0802 17:27:44.266018   13490 main.go:141] libmachine: (addons-723198) DBG | domain addons-723198 has defined IP address 192.168.39.195 and MAC address 52:54:00:6f:78:5c in network mk-addons-723198
	I0802 17:27:44.266221   13490 main.go:141] libmachine: (addons-723198) Calling .GetSSHPort
	I0802 17:27:44.266278   13490 main.go:141] libmachine: (addons-723198) Calling .DriverName
	I0802 17:27:44.266334   13490 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45825
	I0802 17:27:44.266690   13490 main.go:141] libmachine: (addons-723198) Calling .GetSSHKeyPath
	I0802 17:27:44.266758   13490 main.go:141] libmachine: () Calling .GetVersion
	I0802 17:27:44.266912   13490 main.go:141] libmachine: (addons-723198) Calling .GetSSHUsername
	I0802 17:27:44.267102   13490 sshutil.go:53] new ssh client: &{IP:192.168.39.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-5398/.minikube/machines/addons-723198/id_rsa Username:docker}
	I0802 17:27:44.267391   13490 main.go:141] libmachine: (addons-723198) Calling .DriverName
	I0802 17:27:44.267498   13490 main.go:141] libmachine: Using API Version  1
	I0802 17:27:44.267515   13490 main.go:141] libmachine: () Calling .SetConfigRaw
	I0802 17:27:44.267810   13490 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0802 17:27:44.267922   13490 main.go:141] libmachine: () Calling .GetMachineName
	I0802 17:27:44.269179   13490 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0802 17:27:44.269207   13490 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0802 17:27:44.269225   13490 main.go:141] libmachine: (addons-723198) Calling .GetSSHHostname
	I0802 17:27:44.269291   13490 out.go:177]   - Using image docker.io/volcanosh/vc-webhook-manager:v1.9.0
	I0802 17:27:44.269820   13490 main.go:141] libmachine: (addons-723198) Calling .GetState
	I0802 17:27:44.269841   13490 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45235
	I0802 17:27:44.270704   13490 main.go:141] libmachine: () Calling .GetVersion
	I0802 17:27:44.270911   13490 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40563
	I0802 17:27:44.271358   13490 main.go:141] libmachine: Using API Version  1
	I0802 17:27:44.271390   13490 main.go:141] libmachine: () Calling .SetConfigRaw
	I0802 17:27:44.271414   13490 main.go:141] libmachine: () Calling .GetVersion
	I0802 17:27:44.271798   13490 main.go:141] libmachine: () Calling .GetMachineName
	I0802 17:27:44.271976   13490 main.go:141] libmachine: (addons-723198) Calling .GetState
	I0802 17:27:44.272471   13490 out.go:177]   - Using image docker.io/volcanosh/vc-controller-manager:v1.9.0
	I0802 17:27:44.272566   13490 main.go:141] libmachine: (addons-723198) Calling .DriverName
	I0802 17:27:44.272677   13490 main.go:141] libmachine: Using API Version  1
	I0802 17:27:44.272690   13490 main.go:141] libmachine: () Calling .SetConfigRaw
	I0802 17:27:44.272937   13490 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34361
	I0802 17:27:44.272965   13490 main.go:141] libmachine: (addons-723198) DBG | domain addons-723198 has defined MAC address 52:54:00:6f:78:5c in network mk-addons-723198
	I0802 17:27:44.273534   13490 main.go:141] libmachine: (addons-723198) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6f:78:5c", ip: ""} in network mk-addons-723198: {Iface:virbr1 ExpiryTime:2024-08-02 18:26:56 +0000 UTC Type:0 Mac:52:54:00:6f:78:5c Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:addons-723198 Clientid:01:52:54:00:6f:78:5c}
	I0802 17:27:44.273558   13490 main.go:141] libmachine: (addons-723198) DBG | domain addons-723198 has defined IP address 192.168.39.195 and MAC address 52:54:00:6f:78:5c in network mk-addons-723198
	I0802 17:27:44.273684   13490 main.go:141] libmachine: (addons-723198) Calling .GetSSHPort
	I0802 17:27:44.273720   13490 main.go:141] libmachine: () Calling .GetVersion
	I0802 17:27:44.273789   13490 main.go:141] libmachine: () Calling .GetMachineName
	I0802 17:27:44.273828   13490 main.go:141] libmachine: (addons-723198) Calling .GetSSHKeyPath
	I0802 17:27:44.273957   13490 main.go:141] libmachine: (addons-723198) Calling .GetSSHUsername
	I0802 17:27:44.273961   13490 main.go:141] libmachine: (addons-723198) Calling .GetState
	I0802 17:27:44.273999   13490 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0802 17:27:44.274088   13490 sshutil.go:53] new ssh client: &{IP:192.168.39.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-5398/.minikube/machines/addons-723198/id_rsa Username:docker}
	I0802 17:27:44.274493   13490 main.go:141] libmachine: Using API Version  1
	I0802 17:27:44.274507   13490 main.go:141] libmachine: () Calling .SetConfigRaw
	I0802 17:27:44.274942   13490 main.go:141] libmachine: (addons-723198) Calling .DriverName
	I0802 17:27:44.275155   13490 main.go:141] libmachine: () Calling .GetMachineName
	I0802 17:27:44.275208   13490 out.go:177]   - Using image docker.io/volcanosh/vc-scheduler:v1.9.0
	I0802 17:27:44.275341   13490 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0802 17:27:44.275358   13490 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0802 17:27:44.275378   13490 main.go:141] libmachine: (addons-723198) Calling .GetSSHHostname
	I0802 17:27:44.275404   13490 main.go:141] libmachine: (addons-723198) Calling .GetState
	I0802 17:27:44.275574   13490 main.go:141] libmachine: (addons-723198) Calling .DriverName
	I0802 17:27:44.276764   13490 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0802 17:27:44.277189   13490 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45001
	I0802 17:27:44.277400   13490 main.go:141] libmachine: (addons-723198) Calling .DriverName
	I0802 17:27:44.277417   13490 out.go:177]   - Using image docker.io/busybox:stable
	I0802 17:27:44.278118   13490 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0802 17:27:44.278138   13490 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0802 17:27:44.278159   13490 main.go:141] libmachine: (addons-723198) Calling .GetSSHHostname
	I0802 17:27:44.278167   13490 addons.go:431] installing /etc/kubernetes/addons/volcano-deployment.yaml
	I0802 17:27:44.278181   13490 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volcano-deployment.yaml (434001 bytes)
	I0802 17:27:44.278202   13490 main.go:141] libmachine: (addons-723198) Calling .GetSSHHostname
	I0802 17:27:44.278777   13490 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0802 17:27:44.279322   13490 main.go:141] libmachine: (addons-723198) DBG | domain addons-723198 has defined MAC address 52:54:00:6f:78:5c in network mk-addons-723198
	I0802 17:27:44.279805   13490 main.go:141] libmachine: (addons-723198) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6f:78:5c", ip: ""} in network mk-addons-723198: {Iface:virbr1 ExpiryTime:2024-08-02 18:26:56 +0000 UTC Type:0 Mac:52:54:00:6f:78:5c Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:addons-723198 Clientid:01:52:54:00:6f:78:5c}
	I0802 17:27:44.279890   13490 main.go:141] libmachine: (addons-723198) DBG | domain addons-723198 has defined IP address 192.168.39.195 and MAC address 52:54:00:6f:78:5c in network mk-addons-723198
	I0802 17:27:44.279987   13490 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0802 17:27:44.280064   13490 main.go:141] libmachine: (addons-723198) Calling .GetSSHPort
	I0802 17:27:44.280269   13490 main.go:141] libmachine: (addons-723198) Calling .GetSSHKeyPath
	I0802 17:27:44.280427   13490 main.go:141] libmachine: (addons-723198) Calling .GetSSHUsername
	I0802 17:27:44.280562   13490 sshutil.go:53] new ssh client: &{IP:192.168.39.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-5398/.minikube/machines/addons-723198/id_rsa Username:docker}
	I0802 17:27:44.281359   13490 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0802 17:27:44.281420   13490 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0802 17:27:44.281496   13490 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0802 17:27:44.281513   13490 main.go:141] libmachine: (addons-723198) Calling .GetSSHHostname
	I0802 17:27:44.281668   13490 main.go:141] libmachine: (addons-723198) DBG | domain addons-723198 has defined MAC address 52:54:00:6f:78:5c in network mk-addons-723198
	I0802 17:27:44.282096   13490 main.go:141] libmachine: (addons-723198) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6f:78:5c", ip: ""} in network mk-addons-723198: {Iface:virbr1 ExpiryTime:2024-08-02 18:26:56 +0000 UTC Type:0 Mac:52:54:00:6f:78:5c Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:addons-723198 Clientid:01:52:54:00:6f:78:5c}
	I0802 17:27:44.282145   13490 main.go:141] libmachine: (addons-723198) DBG | domain addons-723198 has defined IP address 192.168.39.195 and MAC address 52:54:00:6f:78:5c in network mk-addons-723198
	I0802 17:27:44.282443   13490 main.go:141] libmachine: (addons-723198) Calling .GetSSHPort
	I0802 17:27:44.282580   13490 main.go:141] libmachine: (addons-723198) DBG | domain addons-723198 has defined MAC address 52:54:00:6f:78:5c in network mk-addons-723198
	I0802 17:27:44.282612   13490 main.go:141] libmachine: (addons-723198) Calling .GetSSHKeyPath
	I0802 17:27:44.282744   13490 main.go:141] libmachine: (addons-723198) Calling .GetSSHUsername
	I0802 17:27:44.282854   13490 sshutil.go:53] new ssh client: &{IP:192.168.39.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-5398/.minikube/machines/addons-723198/id_rsa Username:docker}
	I0802 17:27:44.282907   13490 main.go:141] libmachine: (addons-723198) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6f:78:5c", ip: ""} in network mk-addons-723198: {Iface:virbr1 ExpiryTime:2024-08-02 18:26:56 +0000 UTC Type:0 Mac:52:54:00:6f:78:5c Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:addons-723198 Clientid:01:52:54:00:6f:78:5c}
	I0802 17:27:44.282933   13490 main.go:141] libmachine: (addons-723198) DBG | domain addons-723198 has defined IP address 192.168.39.195 and MAC address 52:54:00:6f:78:5c in network mk-addons-723198
	I0802 17:27:44.283163   13490 main.go:141] libmachine: (addons-723198) Calling .GetSSHPort
	I0802 17:27:44.283326   13490 main.go:141] libmachine: (addons-723198) Calling .GetSSHKeyPath
	I0802 17:27:44.283464   13490 main.go:141] libmachine: (addons-723198) Calling .GetSSHUsername
	I0802 17:27:44.283561   13490 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0802 17:27:44.283669   13490 sshutil.go:53] new ssh client: &{IP:192.168.39.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-5398/.minikube/machines/addons-723198/id_rsa Username:docker}
	I0802 17:27:44.284675   13490 main.go:141] libmachine: (addons-723198) DBG | domain addons-723198 has defined MAC address 52:54:00:6f:78:5c in network mk-addons-723198
	I0802 17:27:44.285075   13490 main.go:141] libmachine: (addons-723198) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6f:78:5c", ip: ""} in network mk-addons-723198: {Iface:virbr1 ExpiryTime:2024-08-02 18:26:56 +0000 UTC Type:0 Mac:52:54:00:6f:78:5c Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:addons-723198 Clientid:01:52:54:00:6f:78:5c}
	I0802 17:27:44.285117   13490 main.go:141] libmachine: (addons-723198) DBG | domain addons-723198 has defined IP address 192.168.39.195 and MAC address 52:54:00:6f:78:5c in network mk-addons-723198
	I0802 17:27:44.285232   13490 main.go:141] libmachine: (addons-723198) Calling .GetSSHPort
	I0802 17:27:44.285418   13490 main.go:141] libmachine: (addons-723198) Calling .GetSSHKeyPath
	I0802 17:27:44.285578   13490 main.go:141] libmachine: (addons-723198) Calling .GetSSHUsername
	I0802 17:27:44.285707   13490 sshutil.go:53] new ssh client: &{IP:192.168.39.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-5398/.minikube/machines/addons-723198/id_rsa Username:docker}
	I0802 17:27:44.286455   13490 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0802 17:27:44.288635   13490 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0802 17:27:44.290515   13490 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0802 17:27:44.291849   13490 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0802 17:27:44.292761   13490 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0802 17:27:44.293972   13490 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0802 17:27:44.293991   13490 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0802 17:27:44.294012   13490 main.go:141] libmachine: (addons-723198) Calling .GetSSHHostname
	I0802 17:27:44.296957   13490 main.go:141] libmachine: (addons-723198) DBG | domain addons-723198 has defined MAC address 52:54:00:6f:78:5c in network mk-addons-723198
	I0802 17:27:44.297296   13490 main.go:141] libmachine: (addons-723198) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6f:78:5c", ip: ""} in network mk-addons-723198: {Iface:virbr1 ExpiryTime:2024-08-02 18:26:56 +0000 UTC Type:0 Mac:52:54:00:6f:78:5c Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:addons-723198 Clientid:01:52:54:00:6f:78:5c}
	I0802 17:27:44.297322   13490 main.go:141] libmachine: (addons-723198) DBG | domain addons-723198 has defined IP address 192.168.39.195 and MAC address 52:54:00:6f:78:5c in network mk-addons-723198
	I0802 17:27:44.297534   13490 main.go:141] libmachine: (addons-723198) Calling .GetSSHPort
	I0802 17:27:44.297771   13490 main.go:141] libmachine: (addons-723198) Calling .GetSSHKeyPath
	I0802 17:27:44.297908   13490 main.go:141] libmachine: (addons-723198) Calling .GetSSHUsername
	I0802 17:27:44.298075   13490 sshutil.go:53] new ssh client: &{IP:192.168.39.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-5398/.minikube/machines/addons-723198/id_rsa Username:docker}
	I0802 17:27:44.303705   13490 main.go:141] libmachine: () Calling .GetVersion
	I0802 17:27:44.304128   13490 main.go:141] libmachine: Using API Version  1
	I0802 17:27:44.304146   13490 main.go:141] libmachine: () Calling .SetConfigRaw
	I0802 17:27:44.304429   13490 main.go:141] libmachine: () Calling .GetMachineName
	I0802 17:27:44.304590   13490 main.go:141] libmachine: (addons-723198) Calling .GetState
	W0802 17:27:44.305643   13490 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:48250->192.168.39.195:22: read: connection reset by peer
	I0802 17:27:44.305671   13490 retry.go:31] will retry after 346.030432ms: ssh: handshake failed: read tcp 192.168.39.1:48250->192.168.39.195:22: read: connection reset by peer
	W0802 17:27:44.306053   13490 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:48256->192.168.39.195:22: read: connection reset by peer
	I0802 17:27:44.306073   13490 retry.go:31] will retry after 211.112703ms: ssh: handshake failed: read tcp 192.168.39.1:48256->192.168.39.195:22: read: connection reset by peer
	I0802 17:27:44.306117   13490 main.go:141] libmachine: (addons-723198) Calling .DriverName
	I0802 17:27:44.306424   13490 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0802 17:27:44.306440   13490 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0802 17:27:44.306454   13490 main.go:141] libmachine: (addons-723198) Calling .GetSSHHostname
	I0802 17:27:44.309167   13490 main.go:141] libmachine: (addons-723198) DBG | domain addons-723198 has defined MAC address 52:54:00:6f:78:5c in network mk-addons-723198
	I0802 17:27:44.309588   13490 main.go:141] libmachine: (addons-723198) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6f:78:5c", ip: ""} in network mk-addons-723198: {Iface:virbr1 ExpiryTime:2024-08-02 18:26:56 +0000 UTC Type:0 Mac:52:54:00:6f:78:5c Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:addons-723198 Clientid:01:52:54:00:6f:78:5c}
	I0802 17:27:44.309616   13490 main.go:141] libmachine: (addons-723198) DBG | domain addons-723198 has defined IP address 192.168.39.195 and MAC address 52:54:00:6f:78:5c in network mk-addons-723198
	I0802 17:27:44.309748   13490 main.go:141] libmachine: (addons-723198) Calling .GetSSHPort
	I0802 17:27:44.309944   13490 main.go:141] libmachine: (addons-723198) Calling .GetSSHKeyPath
	I0802 17:27:44.310107   13490 main.go:141] libmachine: (addons-723198) Calling .GetSSHUsername
	I0802 17:27:44.310259   13490 sshutil.go:53] new ssh client: &{IP:192.168.39.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-5398/.minikube/machines/addons-723198/id_rsa Username:docker}
	I0802 17:27:44.741033   13490 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0802 17:27:44.741056   13490 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0802 17:27:44.793883   13490 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0802 17:27:44.793904   13490 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0802 17:27:44.821366   13490 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0802 17:27:44.882890   13490 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0802 17:27:44.882917   13490 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0802 17:27:44.884322   13490 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0802 17:27:44.884341   13490 ssh_runner.go:362] scp helm-tiller/helm-tiller-rbac.yaml --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I0802 17:27:44.981835   13490 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0802 17:27:44.981860   13490 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0802 17:27:45.092922   13490 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0802 17:27:45.092947   13490 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0802 17:27:45.188473   13490 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0802 17:27:45.188496   13490 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0802 17:27:45.296369   13490 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0802 17:27:45.296391   13490 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0802 17:27:45.311265   13490 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0802 17:27:45.379656   13490 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0802 17:27:45.396033   13490 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0802 17:27:45.396057   13490 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0802 17:27:45.439494   13490 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0802 17:27:45.439518   13490 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0802 17:27:45.440559   13490 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0802 17:27:45.440578   13490 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0802 17:27:45.444718   13490 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml
	I0802 17:27:45.503656   13490 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0802 17:27:45.515473   13490 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0802 17:27:45.519678   13490 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0802 17:27:45.563886   13490 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0802 17:27:45.563920   13490 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0802 17:27:45.572693   13490 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0802 17:27:45.653514   13490 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0802 17:27:45.653534   13490 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0802 17:27:45.668745   13490 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0802 17:27:45.668767   13490 ssh_runner.go:362] scp helm-tiller/helm-tiller-svc.yaml --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I0802 17:27:45.692718   13490 ssh_runner.go:235] Completed: sudo systemctl daemon-reload: (1.565940593s)
	I0802 17:27:45.692787   13490 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0802 17:27:45.693073   13490 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml": (1.573589314s)
	I0802 17:27:45.693248   13490 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0802 17:27:45.725258   13490 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0802 17:27:45.725285   13490 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0802 17:27:45.741729   13490 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0802 17:27:45.784056   13490 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0802 17:27:45.784081   13490 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0802 17:27:45.788472   13490 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0802 17:27:45.788490   13490 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0802 17:27:45.820945   13490 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0802 17:27:45.820969   13490 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0802 17:27:45.824565   13490 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0802 17:27:45.860250   13490 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0802 17:27:45.880407   13490 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0802 17:27:45.880442   13490 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0802 17:27:45.970472   13490 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0802 17:27:45.970497   13490 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0802 17:27:46.000584   13490 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0802 17:27:46.031313   13490 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0802 17:27:46.031343   13490 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0802 17:27:46.061153   13490 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0802 17:27:46.061186   13490 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0802 17:27:46.156622   13490 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0802 17:27:46.156649   13490 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0802 17:27:46.213874   13490 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0802 17:27:46.213895   13490 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0802 17:27:46.363970   13490 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0802 17:27:46.363997   13490 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0802 17:27:46.489616   13490 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0802 17:27:46.489647   13490 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0802 17:27:46.501591   13490 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0802 17:27:46.501621   13490 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0802 17:27:46.665012   13490 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0802 17:27:46.786969   13490 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0802 17:27:46.786993   13490 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0802 17:27:46.789675   13490 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0802 17:27:46.789696   13490 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0802 17:27:46.956174   13490 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0802 17:27:46.956204   13490 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0802 17:27:46.958710   13490 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0802 17:27:46.958728   13490 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0802 17:27:47.166125   13490 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0802 17:27:47.192020   13490 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0802 17:27:47.192044   13490 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0802 17:27:47.396051   13490 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0802 17:27:47.396081   13490 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0802 17:27:47.770870   13490 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0802 17:27:50.901959   13490 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (5.59066178s)
	I0802 17:27:50.902002   13490 main.go:141] libmachine: Making call to close driver server
	I0802 17:27:50.902018   13490 main.go:141] libmachine: (addons-723198) Calling .Close
	I0802 17:27:50.901962   13490 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (6.080560174s)
	I0802 17:27:50.902108   13490 main.go:141] libmachine: Making call to close driver server
	I0802 17:27:50.902128   13490 main.go:141] libmachine: (addons-723198) Calling .Close
	I0802 17:27:50.902300   13490 main.go:141] libmachine: Successfully made call to close driver server
	I0802 17:27:50.902320   13490 main.go:141] libmachine: Making call to close connection to plugin binary
	I0802 17:27:50.902330   13490 main.go:141] libmachine: Making call to close driver server
	I0802 17:27:50.902338   13490 main.go:141] libmachine: (addons-723198) Calling .Close
	I0802 17:27:50.902428   13490 main.go:141] libmachine: (addons-723198) DBG | Closing plugin on server side
	I0802 17:27:50.902430   13490 main.go:141] libmachine: Successfully made call to close driver server
	I0802 17:27:50.902447   13490 main.go:141] libmachine: Making call to close connection to plugin binary
	I0802 17:27:50.902465   13490 main.go:141] libmachine: Making call to close driver server
	I0802 17:27:50.902474   13490 main.go:141] libmachine: (addons-723198) Calling .Close
	I0802 17:27:50.902645   13490 main.go:141] libmachine: Successfully made call to close driver server
	I0802 17:27:50.902664   13490 main.go:141] libmachine: Making call to close connection to plugin binary
	I0802 17:27:50.902731   13490 main.go:141] libmachine: Successfully made call to close driver server
	I0802 17:27:50.902749   13490 main.go:141] libmachine: Making call to close connection to plugin binary
	I0802 17:27:51.129276   13490 main.go:141] libmachine: Making call to close driver server
	I0802 17:27:51.129302   13490 main.go:141] libmachine: (addons-723198) Calling .Close
	I0802 17:27:51.129626   13490 main.go:141] libmachine: (addons-723198) DBG | Closing plugin on server side
	I0802 17:27:51.129642   13490 main.go:141] libmachine: Successfully made call to close driver server
	I0802 17:27:51.129658   13490 main.go:141] libmachine: Making call to close connection to plugin binary
	I0802 17:27:51.275283   13490 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0802 17:27:51.275323   13490 main.go:141] libmachine: (addons-723198) Calling .GetSSHHostname
	I0802 17:27:51.278704   13490 main.go:141] libmachine: (addons-723198) DBG | domain addons-723198 has defined MAC address 52:54:00:6f:78:5c in network mk-addons-723198
	I0802 17:27:51.279225   13490 main.go:141] libmachine: (addons-723198) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6f:78:5c", ip: ""} in network mk-addons-723198: {Iface:virbr1 ExpiryTime:2024-08-02 18:26:56 +0000 UTC Type:0 Mac:52:54:00:6f:78:5c Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:addons-723198 Clientid:01:52:54:00:6f:78:5c}
	I0802 17:27:51.279255   13490 main.go:141] libmachine: (addons-723198) DBG | domain addons-723198 has defined IP address 192.168.39.195 and MAC address 52:54:00:6f:78:5c in network mk-addons-723198
	I0802 17:27:51.279536   13490 main.go:141] libmachine: (addons-723198) Calling .GetSSHPort
	I0802 17:27:51.279777   13490 main.go:141] libmachine: (addons-723198) Calling .GetSSHKeyPath
	I0802 17:27:51.279970   13490 main.go:141] libmachine: (addons-723198) Calling .GetSSHUsername
	I0802 17:27:51.280212   13490 sshutil.go:53] new ssh client: &{IP:192.168.39.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-5398/.minikube/machines/addons-723198/id_rsa Username:docker}
	I0802 17:27:51.787805   13490 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0802 17:27:52.041130   13490 addons.go:234] Setting addon gcp-auth=true in "addons-723198"
	I0802 17:27:52.041190   13490 host.go:66] Checking if "addons-723198" exists ...
	I0802 17:27:52.041513   13490 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0802 17:27:52.041542   13490 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0802 17:27:52.058036   13490 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41319
	I0802 17:27:52.058485   13490 main.go:141] libmachine: () Calling .GetVersion
	I0802 17:27:52.058967   13490 main.go:141] libmachine: Using API Version  1
	I0802 17:27:52.058987   13490 main.go:141] libmachine: () Calling .SetConfigRaw
	I0802 17:27:52.059400   13490 main.go:141] libmachine: () Calling .GetMachineName
	I0802 17:27:52.059836   13490 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0802 17:27:52.059867   13490 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0802 17:27:52.075525   13490 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33965
	I0802 17:27:52.075945   13490 main.go:141] libmachine: () Calling .GetVersion
	I0802 17:27:52.076478   13490 main.go:141] libmachine: Using API Version  1
	I0802 17:27:52.076498   13490 main.go:141] libmachine: () Calling .SetConfigRaw
	I0802 17:27:52.076861   13490 main.go:141] libmachine: () Calling .GetMachineName
	I0802 17:27:52.077070   13490 main.go:141] libmachine: (addons-723198) Calling .GetState
	I0802 17:27:52.078862   13490 main.go:141] libmachine: (addons-723198) Calling .DriverName
	I0802 17:27:52.079071   13490 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0802 17:27:52.079092   13490 main.go:141] libmachine: (addons-723198) Calling .GetSSHHostname
	I0802 17:27:52.081804   13490 main.go:141] libmachine: (addons-723198) DBG | domain addons-723198 has defined MAC address 52:54:00:6f:78:5c in network mk-addons-723198
	I0802 17:27:52.082222   13490 main.go:141] libmachine: (addons-723198) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6f:78:5c", ip: ""} in network mk-addons-723198: {Iface:virbr1 ExpiryTime:2024-08-02 18:26:56 +0000 UTC Type:0 Mac:52:54:00:6f:78:5c Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:addons-723198 Clientid:01:52:54:00:6f:78:5c}
	I0802 17:27:52.082252   13490 main.go:141] libmachine: (addons-723198) DBG | domain addons-723198 has defined IP address 192.168.39.195 and MAC address 52:54:00:6f:78:5c in network mk-addons-723198
	I0802 17:27:52.082402   13490 main.go:141] libmachine: (addons-723198) Calling .GetSSHPort
	I0802 17:27:52.082578   13490 main.go:141] libmachine: (addons-723198) Calling .GetSSHKeyPath
	I0802 17:27:52.082737   13490 main.go:141] libmachine: (addons-723198) Calling .GetSSHUsername
	I0802 17:27:52.082896   13490 sshutil.go:53] new ssh client: &{IP:192.168.39.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-5398/.minikube/machines/addons-723198/id_rsa Username:docker}
	I0802 17:27:54.112960   13490 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (8.733269219s)
	I0802 17:27:54.113012   13490 main.go:141] libmachine: Making call to close driver server
	I0802 17:27:54.113022   13490 main.go:141] libmachine: (addons-723198) Calling .Close
	I0802 17:27:54.113284   13490 main.go:141] libmachine: Successfully made call to close driver server
	I0802 17:27:54.113302   13490 main.go:141] libmachine: Making call to close connection to plugin binary
	I0802 17:27:54.113311   13490 main.go:141] libmachine: Making call to close driver server
	I0802 17:27:54.113319   13490 main.go:141] libmachine: (addons-723198) Calling .Close
	I0802 17:27:54.113550   13490 main.go:141] libmachine: Successfully made call to close driver server
	I0802 17:27:54.113567   13490 main.go:141] libmachine: Making call to close connection to plugin binary
	I0802 17:27:54.113577   13490 addons.go:475] Verifying addon ingress=true in "addons-723198"
	I0802 17:27:54.115542   13490 out.go:177] * Verifying ingress addon...
	I0802 17:27:54.117388   13490 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0802 17:27:54.124482   13490 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0802 17:27:54.124508   13490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:27:54.644844   13490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:27:55.129064   13490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:27:55.672657   13490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:27:56.133755   13490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:27:56.466512   13490 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml: (11.021768815s)
	I0802 17:27:56.466559   13490 main.go:141] libmachine: Making call to close driver server
	I0802 17:27:56.466573   13490 main.go:141] libmachine: (addons-723198) Calling .Close
	I0802 17:27:56.466559   13490 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (10.962870248s)
	I0802 17:27:56.466653   13490 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (10.951154806s)
	I0802 17:27:56.466657   13490 main.go:141] libmachine: Making call to close driver server
	I0802 17:27:56.466703   13490 main.go:141] libmachine: (addons-723198) Calling .Close
	I0802 17:27:56.466724   13490 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (10.94701536s)
	I0802 17:27:56.466684   13490 main.go:141] libmachine: Making call to close driver server
	I0802 17:27:56.466750   13490 main.go:141] libmachine: Making call to close driver server
	I0802 17:27:56.466747   13490 main.go:141] libmachine: (addons-723198) Calling .Close
	I0802 17:27:56.466776   13490 main.go:141] libmachine: (addons-723198) Calling .Close
	I0802 17:27:56.466858   13490 main.go:141] libmachine: Successfully made call to close driver server
	I0802 17:27:56.466876   13490 main.go:141] libmachine: Making call to close connection to plugin binary
	I0802 17:27:56.466887   13490 main.go:141] libmachine: Making call to close driver server
	I0802 17:27:56.466894   13490 main.go:141] libmachine: (addons-723198) Calling .Close
	I0802 17:27:56.467006   13490 main.go:141] libmachine: (addons-723198) DBG | Closing plugin on server side
	I0802 17:27:56.467045   13490 main.go:141] libmachine: Successfully made call to close driver server
	I0802 17:27:56.467060   13490 main.go:141] libmachine: Making call to close connection to plugin binary
	I0802 17:27:56.467069   13490 main.go:141] libmachine: Making call to close driver server
	I0802 17:27:56.467078   13490 main.go:141] libmachine: (addons-723198) Calling .Close
	I0802 17:27:56.467099   13490 main.go:141] libmachine: (addons-723198) DBG | Closing plugin on server side
	I0802 17:27:56.467101   13490 main.go:141] libmachine: Successfully made call to close driver server
	I0802 17:27:56.467069   13490 main.go:141] libmachine: (addons-723198) DBG | Closing plugin on server side
	I0802 17:27:56.467111   13490 main.go:141] libmachine: Successfully made call to close driver server
	I0802 17:27:56.467115   13490 main.go:141] libmachine: Making call to close connection to plugin binary
	I0802 17:27:56.467123   13490 main.go:141] libmachine: Making call to close connection to plugin binary
	I0802 17:27:56.467125   13490 main.go:141] libmachine: Making call to close driver server
	I0802 17:27:56.467133   13490 main.go:141] libmachine: (addons-723198) Calling .Close
	I0802 17:27:56.467163   13490 main.go:141] libmachine: (addons-723198) DBG | Closing plugin on server side
	I0802 17:27:56.467134   13490 main.go:141] libmachine: Making call to close driver server
	I0802 17:27:56.467176   13490 main.go:141] libmachine: (addons-723198) Calling .Close
	I0802 17:27:56.467184   13490 main.go:141] libmachine: Successfully made call to close driver server
	I0802 17:27:56.467191   13490 main.go:141] libmachine: Making call to close connection to plugin binary
	I0802 17:27:56.467258   13490 main.go:141] libmachine: (addons-723198) DBG | Closing plugin on server side
	I0802 17:27:56.467280   13490 main.go:141] libmachine: Successfully made call to close driver server
	I0802 17:27:56.467288   13490 main.go:141] libmachine: Making call to close connection to plugin binary
	I0802 17:27:56.467711   13490 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (10.894091792s)
	I0802 17:27:56.467748   13490 main.go:141] libmachine: Making call to close driver server
	I0802 17:27:56.467760   13490 main.go:141] libmachine: (addons-723198) Calling .Close
	I0802 17:27:56.467841   13490 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (10.775042606s)
	I0802 17:27:56.468599   13490 main.go:141] libmachine: (addons-723198) DBG | Closing plugin on server side
	I0802 17:27:56.468625   13490 main.go:141] libmachine: Successfully made call to close driver server
	I0802 17:27:56.468632   13490 main.go:141] libmachine: Making call to close connection to plugin binary
	I0802 17:27:56.468640   13490 main.go:141] libmachine: Making call to close driver server
	I0802 17:27:56.468647   13490 main.go:141] libmachine: (addons-723198) Calling .Close
	I0802 17:27:56.468696   13490 main.go:141] libmachine: (addons-723198) DBG | Closing plugin on server side
	I0802 17:27:56.468714   13490 main.go:141] libmachine: Successfully made call to close driver server
	I0802 17:27:56.468720   13490 main.go:141] libmachine: Making call to close connection to plugin binary
	I0802 17:27:56.468860   13490 node_ready.go:35] waiting up to 6m0s for node "addons-723198" to be "Ready" ...
	I0802 17:27:56.468937   13490 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (10.775665975s)
	I0802 17:27:56.468957   13490 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0802 17:27:56.469096   13490 main.go:141] libmachine: (addons-723198) DBG | Closing plugin on server side
	I0802 17:27:56.469129   13490 main.go:141] libmachine: Successfully made call to close driver server
	I0802 17:27:56.469140   13490 main.go:141] libmachine: Making call to close connection to plugin binary
	I0802 17:27:56.470031   13490 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (10.728275781s)
	I0802 17:27:56.470061   13490 main.go:141] libmachine: Making call to close driver server
	I0802 17:27:56.470075   13490 main.go:141] libmachine: (addons-723198) Calling .Close
	I0802 17:27:56.470158   13490 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (10.645569909s)
	I0802 17:27:56.470181   13490 main.go:141] libmachine: Making call to close driver server
	I0802 17:27:56.470190   13490 main.go:141] libmachine: (addons-723198) Calling .Close
	I0802 17:27:56.470257   13490 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (10.60998484s)
	I0802 17:27:56.470271   13490 main.go:141] libmachine: Making call to close driver server
	I0802 17:27:56.470279   13490 main.go:141] libmachine: (addons-723198) Calling .Close
	I0802 17:27:56.470335   13490 main.go:141] libmachine: Successfully made call to close driver server
	I0802 17:27:56.470343   13490 main.go:141] libmachine: Making call to close connection to plugin binary
	I0802 17:27:56.470373   13490 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (10.469760834s)
	I0802 17:27:56.470401   13490 main.go:141] libmachine: Making call to close driver server
	I0802 17:27:56.470412   13490 main.go:141] libmachine: (addons-723198) Calling .Close
	I0802 17:27:56.470543   13490 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (9.80549873s)
	W0802 17:27:56.470572   13490 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0802 17:27:56.470599   13490 retry.go:31] will retry after 212.203517ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0802 17:27:56.470675   13490 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (9.304518075s)
	I0802 17:27:56.470693   13490 main.go:141] libmachine: Making call to close driver server
	I0802 17:27:56.470703   13490 main.go:141] libmachine: (addons-723198) Calling .Close
	I0802 17:27:56.471322   13490 main.go:141] libmachine: (addons-723198) DBG | Closing plugin on server side
	I0802 17:27:56.471374   13490 main.go:141] libmachine: (addons-723198) DBG | Closing plugin on server side
	I0802 17:27:56.471405   13490 main.go:141] libmachine: Successfully made call to close driver server
	I0802 17:27:56.471421   13490 main.go:141] libmachine: Making call to close connection to plugin binary
	I0802 17:27:56.471437   13490 main.go:141] libmachine: Making call to close driver server
	I0802 17:27:56.471453   13490 main.go:141] libmachine: (addons-723198) Calling .Close
	I0802 17:27:56.471518   13490 main.go:141] libmachine: (addons-723198) DBG | Closing plugin on server side
	I0802 17:27:56.471547   13490 main.go:141] libmachine: Successfully made call to close driver server
	I0802 17:27:56.471562   13490 main.go:141] libmachine: Making call to close connection to plugin binary
	I0802 17:27:56.471578   13490 main.go:141] libmachine: Making call to close driver server
	I0802 17:27:56.471593   13490 main.go:141] libmachine: (addons-723198) Calling .Close
	I0802 17:27:56.471735   13490 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-723198 service yakd-dashboard -n yakd-dashboard
	
	I0802 17:27:56.471914   13490 main.go:141] libmachine: (addons-723198) DBG | Closing plugin on server side
	I0802 17:27:56.471944   13490 main.go:141] libmachine: Successfully made call to close driver server
	I0802 17:27:56.471954   13490 main.go:141] libmachine: Making call to close connection to plugin binary
	I0802 17:27:56.471966   13490 main.go:141] libmachine: Making call to close driver server
	I0802 17:27:56.471976   13490 main.go:141] libmachine: (addons-723198) Calling .Close
	I0802 17:27:56.472217   13490 main.go:141] libmachine: (addons-723198) DBG | Closing plugin on server side
	I0802 17:27:56.472245   13490 main.go:141] libmachine: (addons-723198) DBG | Closing plugin on server side
	I0802 17:27:56.472270   13490 main.go:141] libmachine: Successfully made call to close driver server
	I0802 17:27:56.472278   13490 main.go:141] libmachine: Making call to close connection to plugin binary
	I0802 17:27:56.472288   13490 addons.go:475] Verifying addon metrics-server=true in "addons-723198"
	I0802 17:27:56.472423   13490 main.go:141] libmachine: Successfully made call to close driver server
	I0802 17:27:56.472432   13490 main.go:141] libmachine: Making call to close connection to plugin binary
	I0802 17:27:56.472439   13490 main.go:141] libmachine: Making call to close driver server
	I0802 17:27:56.472446   13490 main.go:141] libmachine: (addons-723198) Calling .Close
	I0802 17:27:56.472580   13490 main.go:141] libmachine: Successfully made call to close driver server
	I0802 17:27:56.472589   13490 main.go:141] libmachine: Making call to close connection to plugin binary
	I0802 17:27:56.472710   13490 main.go:141] libmachine: Successfully made call to close driver server
	I0802 17:27:56.472720   13490 main.go:141] libmachine: Making call to close connection to plugin binary
	I0802 17:27:56.472731   13490 main.go:141] libmachine: (addons-723198) DBG | Closing plugin on server side
	I0802 17:27:56.472804   13490 main.go:141] libmachine: Successfully made call to close driver server
	I0802 17:27:56.472842   13490 main.go:141] libmachine: Making call to close connection to plugin binary
	I0802 17:27:56.472856   13490 main.go:141] libmachine: Making call to close driver server
	I0802 17:27:56.472864   13490 main.go:141] libmachine: (addons-723198) Calling .Close
	I0802 17:27:56.473248   13490 main.go:141] libmachine: (addons-723198) DBG | Closing plugin on server side
	I0802 17:27:56.473278   13490 main.go:141] libmachine: Successfully made call to close driver server
	I0802 17:27:56.473286   13490 main.go:141] libmachine: Making call to close connection to plugin binary
	I0802 17:27:56.473326   13490 main.go:141] libmachine: Successfully made call to close driver server
	I0802 17:27:56.473335   13490 main.go:141] libmachine: Making call to close connection to plugin binary
	I0802 17:27:56.473342   13490 addons.go:475] Verifying addon registry=true in "addons-723198"
	I0802 17:27:56.475069   13490 out.go:177] * Verifying registry addon...
	I0802 17:27:56.476929   13490 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0802 17:27:56.495566   13490 node_ready.go:49] node "addons-723198" has status "Ready":"True"
	I0802 17:27:56.495589   13490 node_ready.go:38] duration metric: took 26.709315ms for node "addons-723198" to be "Ready" ...
	I0802 17:27:56.495599   13490 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0802 17:27:56.536059   13490 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0802 17:27:56.536088   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0802 17:27:56.590332   13490 main.go:141] libmachine: Making call to close driver server
	I0802 17:27:56.590361   13490 main.go:141] libmachine: (addons-723198) Calling .Close
	I0802 17:27:56.590656   13490 main.go:141] libmachine: (addons-723198) DBG | Closing plugin on server side
	I0802 17:27:56.590714   13490 main.go:141] libmachine: Successfully made call to close driver server
	I0802 17:27:56.590733   13490 main.go:141] libmachine: Making call to close connection to plugin binary
	I0802 17:27:56.599391   13490 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-8cscl" in "kube-system" namespace to be "Ready" ...
	I0802 17:27:56.674072   13490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:27:56.683627   13490 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0802 17:27:56.717132   13490 pod_ready.go:92] pod "coredns-7db6d8ff4d-8cscl" in "kube-system" namespace has status "Ready":"True"
	I0802 17:27:56.717152   13490 pod_ready.go:81] duration metric: took 117.737708ms for pod "coredns-7db6d8ff4d-8cscl" in "kube-system" namespace to be "Ready" ...
	I0802 17:27:56.717164   13490 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-skrgn" in "kube-system" namespace to be "Ready" ...
	I0802 17:27:56.989812   13490 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-723198" context rescaled to 1 replicas
	I0802 17:27:56.997137   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0802 17:27:57.184784   13490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:27:57.517332   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0802 17:27:57.563004   13490 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (5.483910857s)
	I0802 17:27:57.563553   13490 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (9.792619428s)
	I0802 17:27:57.563602   13490 main.go:141] libmachine: Making call to close driver server
	I0802 17:27:57.563616   13490 main.go:141] libmachine: (addons-723198) Calling .Close
	I0802 17:27:57.563947   13490 main.go:141] libmachine: Successfully made call to close driver server
	I0802 17:27:57.563966   13490 main.go:141] libmachine: Making call to close connection to plugin binary
	I0802 17:27:57.563976   13490 main.go:141] libmachine: Making call to close driver server
	I0802 17:27:57.563985   13490 main.go:141] libmachine: (addons-723198) Calling .Close
	I0802 17:27:57.564225   13490 main.go:141] libmachine: (addons-723198) DBG | Closing plugin on server side
	I0802 17:27:57.564258   13490 main.go:141] libmachine: Successfully made call to close driver server
	I0802 17:27:57.564266   13490 main.go:141] libmachine: Making call to close connection to plugin binary
	I0802 17:27:57.564282   13490 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-723198"
	I0802 17:27:57.564691   13490 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0802 17:27:57.565826   13490 out.go:177] * Verifying csi-hostpath-driver addon...
	I0802 17:27:57.567165   13490 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0802 17:27:57.568145   13490 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0802 17:27:57.568363   13490 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0802 17:27:57.568384   13490 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0802 17:27:57.641784   13490 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0802 17:27:57.641805   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:27:57.645876   13490 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0802 17:27:57.645894   13490 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0802 17:27:57.663629   13490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:27:57.731431   13490 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0802 17:27:57.731456   13490 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0802 17:27:57.787013   13490 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0802 17:27:58.021707   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0802 17:27:58.106218   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:27:58.124599   13490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:27:58.482883   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0802 17:27:58.578043   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:27:58.627953   13490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:27:58.723547   13490 pod_ready.go:102] pod "coredns-7db6d8ff4d-skrgn" in "kube-system" namespace has status "Ready":"False"
	I0802 17:27:58.835280   13490 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.151601638s)
	I0802 17:27:58.835395   13490 main.go:141] libmachine: Making call to close driver server
	I0802 17:27:58.835413   13490 main.go:141] libmachine: (addons-723198) Calling .Close
	I0802 17:27:58.835772   13490 main.go:141] libmachine: Successfully made call to close driver server
	I0802 17:27:58.835831   13490 main.go:141] libmachine: Making call to close connection to plugin binary
	I0802 17:27:58.835846   13490 main.go:141] libmachine: Making call to close driver server
	I0802 17:27:58.835854   13490 main.go:141] libmachine: (addons-723198) Calling .Close
	I0802 17:27:58.835772   13490 main.go:141] libmachine: (addons-723198) DBG | Closing plugin on server side
	I0802 17:27:58.836089   13490 main.go:141] libmachine: Successfully made call to close driver server
	I0802 17:27:58.836145   13490 main.go:141] libmachine: Making call to close connection to plugin binary
	I0802 17:27:58.836121   13490 main.go:141] libmachine: (addons-723198) DBG | Closing plugin on server side
	I0802 17:27:58.982808   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0802 17:27:59.075466   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:27:59.134936   13490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:27:59.228566   13490 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.441505029s)
	I0802 17:27:59.228612   13490 main.go:141] libmachine: Making call to close driver server
	I0802 17:27:59.228622   13490 main.go:141] libmachine: (addons-723198) Calling .Close
	I0802 17:27:59.228956   13490 main.go:141] libmachine: Successfully made call to close driver server
	I0802 17:27:59.228975   13490 main.go:141] libmachine: Making call to close connection to plugin binary
	I0802 17:27:59.228985   13490 main.go:141] libmachine: Making call to close driver server
	I0802 17:27:59.229006   13490 main.go:141] libmachine: (addons-723198) Calling .Close
	I0802 17:27:59.229003   13490 main.go:141] libmachine: (addons-723198) DBG | Closing plugin on server side
	I0802 17:27:59.229278   13490 main.go:141] libmachine: Successfully made call to close driver server
	I0802 17:27:59.229295   13490 main.go:141] libmachine: Making call to close connection to plugin binary
	I0802 17:27:59.229299   13490 main.go:141] libmachine: (addons-723198) DBG | Closing plugin on server side
	I0802 17:27:59.231361   13490 addons.go:475] Verifying addon gcp-auth=true in "addons-723198"
	I0802 17:27:59.233213   13490 out.go:177] * Verifying gcp-auth addon...
	I0802 17:27:59.235045   13490 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0802 17:27:59.244023   13490 kapi.go:86] Found 0 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0802 17:27:59.481500   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0802 17:27:59.575201   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:27:59.621654   13490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:27:59.982464   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0802 17:28:00.081080   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:28:00.122894   13490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:28:00.481956   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0802 17:28:00.575421   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:28:00.623106   13490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:28:00.982531   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0802 17:28:01.074949   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:28:01.121914   13490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:28:01.225269   13490 pod_ready.go:102] pod "coredns-7db6d8ff4d-skrgn" in "kube-system" namespace has status "Ready":"False"
	I0802 17:28:01.481573   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0802 17:28:01.574278   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:28:01.622494   13490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:28:01.983114   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0802 17:28:02.074140   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:28:02.359144   13490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:28:02.482892   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0802 17:28:02.573654   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:28:02.624010   13490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:28:02.724108   13490 pod_ready.go:97] pod "coredns-7db6d8ff4d-skrgn" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:PodReadyToStartContainers Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-08-02 17:28:02 +0000 UTC Reason: Message:} {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-08-02 17:27:43 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-08-02 17:27:43 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-08-02 17:27:43 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-08-02 17:27:43 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.39.195 HostIPs:[{IP:192.168.39.195}] PodIP: PodIPs:[] StartTime:2024-08-02 17:27:43 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2024-08-02 17:27:45 +0000 UTC,FinishedAt:2024-08-02 17:28:01 +0000 UTC,ContainerID:docker://9096e9a85c5618c1d4eba2f95d8698a5e10db2e251eee945701b3cb18e3f7be0,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.11.1 ImageID:docker-pullable://registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1 ContainerID:docker://9096e9a85c5618c1d4eba2f95d8698a5e10db2e251eee945701b3cb18e3f7be0 Started:0xc0026343b0 AllocatedResources:map[] Resources:nil VolumeMounts:[]}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0802 17:28:02.724180   13490 pod_ready.go:81] duration metric: took 6.007007014s for pod "coredns-7db6d8ff4d-skrgn" in "kube-system" namespace to be "Ready" ...
	E0802 17:28:02.724196   13490 pod_ready.go:66] WaitExtra: waitPodCondition: pod "coredns-7db6d8ff4d-skrgn" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:PodReadyToStartContainers Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-08-02 17:28:02 +0000 UTC Reason: Message:} {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-08-02 17:27:43 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-08-02 17:27:43 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-08-02 17:27:43 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-08-02 17:27:43 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.39.195 HostIPs:[{IP:192.168.39.195}] PodIP: PodIPs:[] StartTime:2024-08-02 17:27:43 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2024-08-02 17:27:45 +0000 UTC,FinishedAt:2024-08-02 17:28:01 +0000 UTC,ContainerID:docker://9096e9a85c5618c1d4eba2f95d8698a5e10db2e251eee945701b3cb18e3f7be0,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.11.1 ImageID:docker-pullable://registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1 ContainerID:docker://9096e9a85c5618c1d4eba2f95d8698a5e10db2e251eee945701b3cb18e3f7be0 Started:0xc0026343b0 AllocatedResources:map[] Resources:nil VolumeMounts:[]}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0802 17:28:02.724206   13490 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-723198" in "kube-system" namespace to be "Ready" ...
	I0802 17:28:02.728992   13490 pod_ready.go:92] pod "etcd-addons-723198" in "kube-system" namespace has status "Ready":"True"
	I0802 17:28:02.729020   13490 pod_ready.go:81] duration metric: took 4.802898ms for pod "etcd-addons-723198" in "kube-system" namespace to be "Ready" ...
	I0802 17:28:02.729035   13490 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-723198" in "kube-system" namespace to be "Ready" ...
	I0802 17:28:02.733933   13490 pod_ready.go:92] pod "kube-apiserver-addons-723198" in "kube-system" namespace has status "Ready":"True"
	I0802 17:28:02.733957   13490 pod_ready.go:81] duration metric: took 4.913107ms for pod "kube-apiserver-addons-723198" in "kube-system" namespace to be "Ready" ...
	I0802 17:28:02.733969   13490 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-723198" in "kube-system" namespace to be "Ready" ...
	I0802 17:28:02.738809   13490 pod_ready.go:92] pod "kube-controller-manager-addons-723198" in "kube-system" namespace has status "Ready":"True"
	I0802 17:28:02.738829   13490 pod_ready.go:81] duration metric: took 4.852082ms for pod "kube-controller-manager-addons-723198" in "kube-system" namespace to be "Ready" ...
	I0802 17:28:02.738844   13490 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-b4wn5" in "kube-system" namespace to be "Ready" ...
	I0802 17:28:02.747525   13490 pod_ready.go:92] pod "kube-proxy-b4wn5" in "kube-system" namespace has status "Ready":"True"
	I0802 17:28:02.747544   13490 pod_ready.go:81] duration metric: took 8.693123ms for pod "kube-proxy-b4wn5" in "kube-system" namespace to be "Ready" ...
	I0802 17:28:02.747552   13490 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-723198" in "kube-system" namespace to be "Ready" ...
	I0802 17:28:02.982439   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0802 17:28:03.074849   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:28:03.122008   13490 pod_ready.go:92] pod "kube-scheduler-addons-723198" in "kube-system" namespace has status "Ready":"True"
	I0802 17:28:03.122034   13490 pod_ready.go:81] duration metric: took 374.47638ms for pod "kube-scheduler-addons-723198" in "kube-system" namespace to be "Ready" ...
	I0802 17:28:03.122044   13490 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-c59844bb4-gm5qb" in "kube-system" namespace to be "Ready" ...
	I0802 17:28:03.122435   13490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:28:03.481666   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0802 17:28:03.575324   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:28:03.621237   13490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:28:03.981888   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0802 17:28:04.073906   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:28:04.121803   13490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:28:04.482216   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0802 17:28:04.848812   13490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:28:04.853788   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:28:04.983809   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0802 17:28:05.074236   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:28:05.121357   13490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:28:05.127643   13490 pod_ready.go:102] pod "metrics-server-c59844bb4-gm5qb" in "kube-system" namespace has status "Ready":"False"
	I0802 17:28:05.481105   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0802 17:28:05.573473   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:28:05.621432   13490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:28:05.981520   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0802 17:28:06.076226   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:28:06.122470   13490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:28:06.482415   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0802 17:28:06.575931   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:28:06.622188   13490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:28:06.981887   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0802 17:28:07.074483   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:28:07.121584   13490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:28:07.127813   13490 pod_ready.go:102] pod "metrics-server-c59844bb4-gm5qb" in "kube-system" namespace has status "Ready":"False"
	I0802 17:28:07.481966   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0802 17:28:07.573688   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:28:07.621315   13490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:28:07.982860   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0802 17:28:08.073621   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:28:08.121940   13490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:28:08.481305   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0802 17:28:08.573476   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:28:08.621446   13490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:28:08.983964   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0802 17:28:09.074777   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:28:09.121426   13490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:28:09.127977   13490 pod_ready.go:102] pod "metrics-server-c59844bb4-gm5qb" in "kube-system" namespace has status "Ready":"False"
	I0802 17:28:09.481869   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0802 17:28:09.575970   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:28:09.622311   13490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:28:09.981289   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0802 17:28:10.073471   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:28:10.121538   13490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:28:10.482196   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0802 17:28:10.577971   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:28:10.621519   13490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:28:10.981869   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0802 17:28:11.074221   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:28:11.121421   13490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:28:11.481795   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0802 17:28:11.573388   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:28:11.621531   13490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:28:11.627793   13490 pod_ready.go:102] pod "metrics-server-c59844bb4-gm5qb" in "kube-system" namespace has status "Ready":"False"
	I0802 17:28:11.982284   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0802 17:28:12.073562   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:28:12.121611   13490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:28:12.481479   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0802 17:28:12.572623   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:28:12.621817   13490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:28:12.982127   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0802 17:28:13.076205   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:28:13.126214   13490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:28:13.482956   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0802 17:28:13.585994   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:28:13.621955   13490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:28:13.629303   13490 pod_ready.go:102] pod "metrics-server-c59844bb4-gm5qb" in "kube-system" namespace has status "Ready":"False"
	I0802 17:28:13.982116   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0802 17:28:14.073720   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:28:14.121933   13490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:28:14.481860   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0802 17:28:14.574336   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:28:14.621577   13490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:28:14.981678   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0802 17:28:15.074861   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:28:15.121862   13490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:28:15.483320   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0802 17:28:15.575459   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:28:15.622137   13490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:28:15.982103   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0802 17:28:16.074222   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:28:16.121820   13490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:28:16.127859   13490 pod_ready.go:102] pod "metrics-server-c59844bb4-gm5qb" in "kube-system" namespace has status "Ready":"False"
	I0802 17:28:16.481859   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0802 17:28:16.573889   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:28:16.622323   13490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:28:16.981099   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0802 17:28:17.074748   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:28:17.121830   13490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:28:17.482278   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0802 17:28:17.574909   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:28:17.621927   13490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:28:17.982130   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0802 17:28:18.074429   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:28:18.122056   13490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:28:18.481388   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0802 17:28:18.576674   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:28:18.621575   13490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:28:18.627852   13490 pod_ready.go:102] pod "metrics-server-c59844bb4-gm5qb" in "kube-system" namespace has status "Ready":"False"
	I0802 17:28:18.981510   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0802 17:28:19.073864   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:28:19.121827   13490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:28:19.482309   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0802 17:28:19.577677   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:28:19.621839   13490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:28:19.982300   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0802 17:28:20.074121   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:28:20.122220   13490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:28:20.481671   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0802 17:28:20.574328   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:28:20.621286   13490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:28:20.628556   13490 pod_ready.go:102] pod "metrics-server-c59844bb4-gm5qb" in "kube-system" namespace has status "Ready":"False"
	I0802 17:28:20.982212   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0802 17:28:21.076244   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:28:21.121835   13490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:28:21.481029   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0802 17:28:21.573584   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:28:21.621536   13490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:28:21.982105   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0802 17:28:22.073945   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:28:22.121793   13490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:28:22.483369   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0802 17:28:22.859752   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:28:22.860381   13490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:28:22.861572   13490 pod_ready.go:102] pod "metrics-server-c59844bb4-gm5qb" in "kube-system" namespace has status "Ready":"False"
	I0802 17:28:22.986079   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0802 17:28:23.074279   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:28:23.121523   13490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:28:23.481575   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0802 17:28:23.573624   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:28:23.622913   13490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:28:23.981202   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0802 17:28:24.073388   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:28:24.121532   13490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:28:24.482526   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0802 17:28:24.573673   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:28:24.622928   13490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:28:24.982333   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0802 17:28:25.075246   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:28:25.278004   13490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:28:25.280411   13490 pod_ready.go:102] pod "metrics-server-c59844bb4-gm5qb" in "kube-system" namespace has status "Ready":"False"
	I0802 17:28:25.481869   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0802 17:28:25.573821   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:28:25.621593   13490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:28:25.982080   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0802 17:28:26.076094   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:28:26.122544   13490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:28:26.481252   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0802 17:28:26.576610   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:28:26.621539   13490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:28:26.983054   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0802 17:28:27.075297   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:28:27.123110   13490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:28:27.482541   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0802 17:28:27.574277   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:28:27.621539   13490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:28:27.627898   13490 pod_ready.go:102] pod "metrics-server-c59844bb4-gm5qb" in "kube-system" namespace has status "Ready":"False"
	I0802 17:28:27.981674   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0802 17:28:28.073393   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:28:28.122368   13490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:28:28.483024   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0802 17:28:28.573206   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:28:28.623425   13490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:28:28.981966   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0802 17:28:29.074321   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:28:29.122424   13490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:28:29.480949   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0802 17:28:29.574641   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:28:29.621542   13490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:28:29.628357   13490 pod_ready.go:102] pod "metrics-server-c59844bb4-gm5qb" in "kube-system" namespace has status "Ready":"False"
	I0802 17:28:29.982383   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0802 17:28:30.075807   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:28:30.122126   13490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:28:30.482314   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0802 17:28:30.575540   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:28:30.621451   13490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:28:30.981239   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0802 17:28:31.073673   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:28:31.121515   13490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:28:31.481714   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0802 17:28:31.574229   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:28:31.621642   13490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:28:31.981548   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0802 17:28:32.073346   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:28:32.121560   13490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:28:32.127554   13490 pod_ready.go:102] pod "metrics-server-c59844bb4-gm5qb" in "kube-system" namespace has status "Ready":"False"
	I0802 17:28:32.481770   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0802 17:28:32.904988   13490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:28:32.905893   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:28:32.983050   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0802 17:28:33.073498   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:28:33.123004   13490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:28:33.482320   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0802 17:28:33.574322   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:28:33.622054   13490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:28:33.982613   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0802 17:28:34.073744   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:28:34.121253   13490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:28:34.127767   13490 pod_ready.go:102] pod "metrics-server-c59844bb4-gm5qb" in "kube-system" namespace has status "Ready":"False"
	I0802 17:28:34.482109   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0802 17:28:34.574937   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:28:34.628539   13490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:28:34.981490   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0802 17:28:35.074466   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:28:35.121474   13490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:28:35.481640   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0802 17:28:35.573509   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:28:35.621260   13490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:28:35.982185   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0802 17:28:36.075773   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:28:36.414307   13490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:28:36.416909   13490 pod_ready.go:102] pod "metrics-server-c59844bb4-gm5qb" in "kube-system" namespace has status "Ready":"False"
	I0802 17:28:36.481777   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0802 17:28:36.573167   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:28:36.622105   13490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:28:36.983078   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0802 17:28:37.076252   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:28:37.123915   13490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:28:37.481563   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0802 17:28:37.573853   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:28:37.621674   13490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:28:37.981556   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0802 17:28:38.073509   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:28:38.121626   13490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:28:38.481581   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0802 17:28:38.573395   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:28:38.621524   13490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:28:38.627918   13490 pod_ready.go:102] pod "metrics-server-c59844bb4-gm5qb" in "kube-system" namespace has status "Ready":"False"
	I0802 17:28:38.982059   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0802 17:28:39.073570   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:28:39.121792   13490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:28:39.481163   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0802 17:28:39.573833   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:28:39.622000   13490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:28:39.981927   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0802 17:28:40.075004   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:28:40.121755   13490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:28:40.481342   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0802 17:28:40.573748   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:28:40.621869   13490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:28:40.982388   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0802 17:28:41.077934   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:28:41.122755   13490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:28:41.128747   13490 pod_ready.go:102] pod "metrics-server-c59844bb4-gm5qb" in "kube-system" namespace has status "Ready":"False"
	I0802 17:28:41.482509   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0802 17:28:41.576559   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:28:41.622325   13490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:28:41.981180   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0802 17:28:42.075915   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:28:42.121790   13490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:28:42.481965   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0802 17:28:42.573020   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:28:42.621882   13490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:28:42.981256   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0802 17:28:43.075421   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:28:43.121327   13490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:28:43.675629   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0802 17:28:43.675922   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:28:43.675945   13490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:28:43.678194   13490 pod_ready.go:102] pod "metrics-server-c59844bb4-gm5qb" in "kube-system" namespace has status "Ready":"False"
	I0802 17:28:43.982453   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0802 17:28:44.074059   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:28:44.121921   13490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:28:44.486663   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0802 17:28:44.574009   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:28:44.621735   13490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:28:44.982132   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0802 17:28:45.076904   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:28:45.122746   13490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:28:45.481530   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0802 17:28:45.573020   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:28:45.622404   13490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:28:45.981451   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0802 17:28:46.076309   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:28:46.121878   13490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:28:46.127857   13490 pod_ready.go:102] pod "metrics-server-c59844bb4-gm5qb" in "kube-system" namespace has status "Ready":"False"
	I0802 17:28:46.481546   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0802 17:28:46.573107   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:28:46.622280   13490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:28:46.982683   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0802 17:28:47.073226   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:28:47.121140   13490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:28:47.482318   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0802 17:28:47.573925   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:28:47.622016   13490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:28:47.981813   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0802 17:28:48.074068   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:28:48.122538   13490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:28:48.128365   13490 pod_ready.go:102] pod "metrics-server-c59844bb4-gm5qb" in "kube-system" namespace has status "Ready":"False"
	I0802 17:28:48.481685   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0802 17:28:48.573675   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:28:48.621455   13490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:28:48.981934   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0802 17:28:49.073142   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:28:49.122344   13490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:28:49.481858   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0802 17:28:49.573564   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:28:49.621730   13490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:28:49.981976   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0802 17:28:50.073461   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:28:50.121231   13490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:28:50.481333   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0802 17:28:50.574044   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:28:50.621873   13490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:28:50.627227   13490 pod_ready.go:102] pod "metrics-server-c59844bb4-gm5qb" in "kube-system" namespace has status "Ready":"False"
	I0802 17:28:50.981840   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0802 17:28:51.074456   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:28:51.121430   13490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:28:51.482568   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0802 17:28:51.573572   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:28:51.621732   13490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:28:51.982487   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0802 17:28:52.073946   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:28:52.122105   13490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:28:52.481309   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0802 17:28:52.575386   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:28:52.621241   13490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:28:52.981681   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0802 17:28:53.073258   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:28:53.121959   13490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:28:53.126955   13490 pod_ready.go:102] pod "metrics-server-c59844bb4-gm5qb" in "kube-system" namespace has status "Ready":"False"
	I0802 17:28:53.481549   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0802 17:28:53.572669   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:28:53.621577   13490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:28:53.981411   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0802 17:28:54.074268   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:28:54.121131   13490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:28:54.482028   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0802 17:28:54.577777   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:28:54.622056   13490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:28:54.982026   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0802 17:28:55.075381   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:28:55.121677   13490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:28:55.127604   13490 pod_ready.go:102] pod "metrics-server-c59844bb4-gm5qb" in "kube-system" namespace has status "Ready":"False"
	I0802 17:28:55.481189   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0802 17:28:55.575832   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:28:55.622151   13490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:28:55.981792   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0802 17:28:56.073527   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:28:56.121538   13490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:28:56.482117   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0802 17:28:56.574269   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:28:56.621098   13490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:28:56.981905   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0802 17:28:57.075419   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:28:57.121712   13490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:28:57.481780   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0802 17:28:57.573505   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:28:57.622107   13490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:28:57.626562   13490 pod_ready.go:102] pod "metrics-server-c59844bb4-gm5qb" in "kube-system" namespace has status "Ready":"False"
	I0802 17:28:57.981503   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0802 17:28:58.074525   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:28:58.121708   13490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:28:58.482590   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0802 17:28:58.573819   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:28:58.622101   13490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:28:58.981506   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0802 17:28:59.075246   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:28:59.126034   13490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:28:59.482557   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0802 17:28:59.573274   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:28:59.621468   13490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:28:59.627902   13490 pod_ready.go:102] pod "metrics-server-c59844bb4-gm5qb" in "kube-system" namespace has status "Ready":"False"
	I0802 17:28:59.982542   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0802 17:29:00.073905   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:29:00.122567   13490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:29:00.481773   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0802 17:29:00.573159   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:29:00.622222   13490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:29:00.983084   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0802 17:29:01.073437   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:29:01.121386   13490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:29:01.481763   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0802 17:29:01.574295   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:29:01.621201   13490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:29:01.982166   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0802 17:29:02.074078   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:29:02.121757   13490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:29:02.127138   13490 pod_ready.go:102] pod "metrics-server-c59844bb4-gm5qb" in "kube-system" namespace has status "Ready":"False"
	I0802 17:29:02.483075   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0802 17:29:02.573674   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:29:02.621791   13490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:29:02.982016   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0802 17:29:03.076204   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:29:03.121108   13490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:29:03.481105   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0802 17:29:03.577631   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:29:03.621914   13490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:29:03.983418   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0802 17:29:04.074277   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:29:04.122260   13490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:29:04.127379   13490 pod_ready.go:102] pod "metrics-server-c59844bb4-gm5qb" in "kube-system" namespace has status "Ready":"False"
	I0802 17:29:04.482468   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0802 17:29:04.575803   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:29:04.622157   13490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:29:04.981832   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0802 17:29:05.073273   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:29:05.121496   13490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:29:05.482134   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0802 17:29:05.573657   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:29:05.621208   13490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:29:05.981793   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0802 17:29:06.073318   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:29:06.121689   13490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:29:06.127658   13490 pod_ready.go:102] pod "metrics-server-c59844bb4-gm5qb" in "kube-system" namespace has status "Ready":"False"
	I0802 17:29:06.482432   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0802 17:29:06.575765   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:29:06.621666   13490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:29:06.981757   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0802 17:29:07.073767   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:29:07.121730   13490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:29:07.482289   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0802 17:29:07.573743   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:29:07.622232   13490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:29:07.982051   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0802 17:29:08.073472   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:29:08.121881   13490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:29:08.481867   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0802 17:29:08.574223   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:29:08.622251   13490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:29:08.626921   13490 pod_ready.go:102] pod "metrics-server-c59844bb4-gm5qb" in "kube-system" namespace has status "Ready":"False"
	I0802 17:29:08.981656   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0802 17:29:09.072913   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:29:09.122065   13490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:29:09.481863   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0802 17:29:09.573219   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:29:09.622008   13490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:29:09.982090   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0802 17:29:10.075686   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:29:10.121538   13490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:29:10.482200   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0802 17:29:10.575114   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:29:10.622204   13490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:29:10.981686   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0802 17:29:11.075083   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:29:11.121899   13490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:29:11.127004   13490 pod_ready.go:102] pod "metrics-server-c59844bb4-gm5qb" in "kube-system" namespace has status "Ready":"False"
	I0802 17:29:11.481202   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0802 17:29:11.574816   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:29:11.621852   13490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:29:11.982403   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0802 17:29:12.074005   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:29:12.121817   13490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:29:12.481281   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0802 17:29:12.576286   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:29:12.621946   13490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:29:12.981969   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0802 17:29:13.073169   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:29:13.122861   13490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:29:13.127712   13490 pod_ready.go:102] pod "metrics-server-c59844bb4-gm5qb" in "kube-system" namespace has status "Ready":"False"
	I0802 17:29:13.481381   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0802 17:29:13.575693   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:29:13.623487   13490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:29:13.983722   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0802 17:29:14.073527   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:29:14.121635   13490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:29:14.482386   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0802 17:29:14.573655   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:29:14.621933   13490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:29:14.981939   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0802 17:29:15.075089   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:29:15.122541   13490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:29:15.482450   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0802 17:29:15.576814   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:29:15.622439   13490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:29:15.626631   13490 pod_ready.go:102] pod "metrics-server-c59844bb4-gm5qb" in "kube-system" namespace has status "Ready":"False"
	I0802 17:29:15.982558   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0802 17:29:16.073736   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:29:16.121745   13490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:29:16.482755   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0802 17:29:16.573709   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:29:16.622140   13490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:29:16.981506   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0802 17:29:17.073070   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:29:17.122153   13490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:29:17.482849   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0802 17:29:17.573187   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:29:17.621152   13490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:29:17.627429   13490 pod_ready.go:102] pod "metrics-server-c59844bb4-gm5qb" in "kube-system" namespace has status "Ready":"False"
	I0802 17:29:17.983330   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0802 17:29:18.074761   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:29:18.121657   13490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:29:18.481757   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0802 17:29:18.575458   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:29:18.622125   13490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:29:18.982368   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0802 17:29:19.073829   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:29:19.121739   13490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:29:19.481736   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0802 17:29:19.573544   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:29:19.622231   13490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:29:19.982163   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0802 17:29:20.073851   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:29:20.121569   13490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:29:20.128340   13490 pod_ready.go:102] pod "metrics-server-c59844bb4-gm5qb" in "kube-system" namespace has status "Ready":"False"
	I0802 17:29:20.482344   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0802 17:29:20.573566   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:29:20.621350   13490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:29:20.981609   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0802 17:29:21.072641   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:29:21.122162   13490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:29:21.481927   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0802 17:29:21.574125   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:29:21.621818   13490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:29:21.983605   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0802 17:29:22.079036   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:29:22.124271   13490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:29:22.134826   13490 pod_ready.go:102] pod "metrics-server-c59844bb4-gm5qb" in "kube-system" namespace has status "Ready":"False"
	I0802 17:29:22.481918   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0802 17:29:22.573557   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:29:22.621872   13490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:29:22.982084   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0802 17:29:23.075193   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:29:23.121365   13490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:29:23.481748   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0802 17:29:23.577513   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:29:23.621199   13490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:29:23.986056   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0802 17:29:24.073557   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:29:24.121771   13490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:29:24.482396   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0802 17:29:24.575186   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:29:24.791087   13490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:29:24.795828   13490 pod_ready.go:102] pod "metrics-server-c59844bb4-gm5qb" in "kube-system" namespace has status "Ready":"False"
	I0802 17:29:24.982370   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0802 17:29:25.076523   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:29:25.121982   13490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:29:25.482712   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0802 17:29:25.573423   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:29:25.621735   13490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:29:25.981379   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0802 17:29:26.075893   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:29:26.123258   13490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:29:26.481705   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0802 17:29:26.574239   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:29:26.622161   13490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:29:26.983430   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0802 17:29:27.073316   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:29:27.121668   13490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:29:27.127862   13490 pod_ready.go:102] pod "metrics-server-c59844bb4-gm5qb" in "kube-system" namespace has status "Ready":"False"
	I0802 17:29:27.481635   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0802 17:29:27.573339   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:29:27.621769   13490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:29:27.981864   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0802 17:29:28.073876   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:29:28.122327   13490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:29:28.482140   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0802 17:29:28.575601   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:29:28.621383   13490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:29:28.981237   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0802 17:29:29.074002   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:29:29.122205   13490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:29:29.482337   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0802 17:29:29.576086   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:29:29.622004   13490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:29:29.626703   13490 pod_ready.go:102] pod "metrics-server-c59844bb4-gm5qb" in "kube-system" namespace has status "Ready":"False"
	I0802 17:29:29.982907   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0802 17:29:30.074196   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:29:30.121979   13490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:29:30.482556   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0802 17:29:30.577129   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:29:30.622991   13490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:29:30.981168   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0802 17:29:31.073979   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:29:31.121932   13490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:29:31.481682   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0802 17:29:31.573441   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:29:31.622537   13490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:29:31.627000   13490 pod_ready.go:102] pod "metrics-server-c59844bb4-gm5qb" in "kube-system" namespace has status "Ready":"False"
	I0802 17:29:31.981427   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0802 17:29:32.075899   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:29:32.121673   13490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:29:32.481659   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0802 17:29:32.573245   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:29:32.622571   13490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:29:32.982449   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0802 17:29:33.074723   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:29:33.121598   13490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:29:33.481475   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0802 17:29:33.574520   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:29:33.623159   13490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:29:33.627146   13490 pod_ready.go:102] pod "metrics-server-c59844bb4-gm5qb" in "kube-system" namespace has status "Ready":"False"
	I0802 17:29:33.982005   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0802 17:29:34.073558   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:29:34.123111   13490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:29:34.481685   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0802 17:29:34.573212   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:29:34.623618   13490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:29:34.982215   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0802 17:29:35.074038   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:29:35.122032   13490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:29:35.482200   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0802 17:29:35.574300   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:29:35.622766   13490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:29:35.982174   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0802 17:29:36.075346   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:29:36.123939   13490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:29:36.130579   13490 pod_ready.go:102] pod "metrics-server-c59844bb4-gm5qb" in "kube-system" namespace has status "Ready":"False"
	I0802 17:29:36.481284   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0802 17:29:36.577674   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:29:36.621453   13490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:29:36.983294   13490 kapi.go:107] duration metric: took 1m40.50636054s to wait for kubernetes.io/minikube-addons=registry ...
	I0802 17:29:37.074183   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:29:37.122141   13490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:29:37.573344   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:29:37.621589   13490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:29:38.073272   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:29:38.121577   13490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:29:38.606322   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:29:38.624233   13490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:29:38.632919   13490 pod_ready.go:102] pod "metrics-server-c59844bb4-gm5qb" in "kube-system" namespace has status "Ready":"False"
	I0802 17:29:39.073901   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:29:39.122161   13490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:29:39.573032   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:29:39.622627   13490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:29:40.073992   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:29:40.121744   13490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:29:40.126454   13490 pod_ready.go:92] pod "metrics-server-c59844bb4-gm5qb" in "kube-system" namespace has status "Ready":"True"
	I0802 17:29:40.126476   13490 pod_ready.go:81] duration metric: took 1m37.004425008s for pod "metrics-server-c59844bb4-gm5qb" in "kube-system" namespace to be "Ready" ...
	I0802 17:29:40.126488   13490 pod_ready.go:78] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-g2nk5" in "kube-system" namespace to be "Ready" ...
	I0802 17:29:40.132307   13490 pod_ready.go:92] pod "nvidia-device-plugin-daemonset-g2nk5" in "kube-system" namespace has status "Ready":"True"
	I0802 17:29:40.132336   13490 pod_ready.go:81] duration metric: took 5.840827ms for pod "nvidia-device-plugin-daemonset-g2nk5" in "kube-system" namespace to be "Ready" ...
	I0802 17:29:40.132363   13490 pod_ready.go:38] duration metric: took 1m43.636754752s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0802 17:29:40.132387   13490 api_server.go:52] waiting for apiserver process to appear ...
	I0802 17:29:40.132466   13490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0802 17:29:40.166225   13490 logs.go:276] 1 containers: [9d52a5ae4e94]
	I0802 17:29:40.166290   13490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0802 17:29:40.191072   13490 logs.go:276] 1 containers: [8644df599db2]
	I0802 17:29:40.191132   13490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0802 17:29:40.217744   13490 logs.go:276] 1 containers: [097b9119a9bd]
	I0802 17:29:40.217807   13490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0802 17:29:40.263730   13490 logs.go:276] 1 containers: [254be607cf1d]
	I0802 17:29:40.263810   13490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0802 17:29:40.284777   13490 logs.go:276] 1 containers: [a9d8d91be079]
	I0802 17:29:40.284861   13490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0802 17:29:40.316170   13490 logs.go:276] 1 containers: [3e94af1c2883]
	I0802 17:29:40.316263   13490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0802 17:29:40.354456   13490 logs.go:276] 0 containers: []
	W0802 17:29:40.354479   13490 logs.go:278] No container was found matching "kindnet"
	I0802 17:29:40.354489   13490 logs.go:123] Gathering logs for kubelet ...
	I0802 17:29:40.354502   13490 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0802 17:29:40.437403   13490 logs.go:123] Gathering logs for kube-apiserver [9d52a5ae4e94] ...
	I0802 17:29:40.437438   13490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9d52a5ae4e94"
	I0802 17:29:40.487598   13490 logs.go:123] Gathering logs for coredns [097b9119a9bd] ...
	I0802 17:29:40.487630   13490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 097b9119a9bd"
	I0802 17:29:40.529253   13490 logs.go:123] Gathering logs for Docker ...
	I0802 17:29:40.529287   13490 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0802 17:29:40.617292   13490 logs.go:123] Gathering logs for dmesg ...
	I0802 17:29:40.617324   13490 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0802 17:29:40.625929   13490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:29:40.630513   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:29:40.640344   13490 logs.go:123] Gathering logs for describe nodes ...
	I0802 17:29:40.640368   13490 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0802 17:29:40.864780   13490 logs.go:123] Gathering logs for etcd [8644df599db2] ...
	I0802 17:29:40.864807   13490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8644df599db2"
	I0802 17:29:40.922279   13490 logs.go:123] Gathering logs for kube-scheduler [254be607cf1d] ...
	I0802 17:29:40.922310   13490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 254be607cf1d"
	I0802 17:29:40.956122   13490 logs.go:123] Gathering logs for kube-proxy [a9d8d91be079] ...
	I0802 17:29:40.956150   13490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a9d8d91be079"
	I0802 17:29:40.977907   13490 logs.go:123] Gathering logs for kube-controller-manager [3e94af1c2883] ...
	I0802 17:29:40.977931   13490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e94af1c2883"
	I0802 17:29:41.030888   13490 logs.go:123] Gathering logs for container status ...
	I0802 17:29:41.030922   13490 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0802 17:29:41.072994   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:29:41.121894   13490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:29:41.573845   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:29:41.621809   13490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:29:42.073930   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:29:42.121955   13490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:29:42.573119   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:29:42.621881   13490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:29:43.073785   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:29:43.123554   13490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:29:43.574130   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:29:43.606216   13490 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0802 17:29:43.621545   13490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:29:43.636403   13490 api_server.go:72] duration metric: took 1m59.516890447s to wait for apiserver process to appear ...
	I0802 17:29:43.636429   13490 api_server.go:88] waiting for apiserver healthz status ...
	I0802 17:29:43.636487   13490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0802 17:29:43.660693   13490 logs.go:276] 1 containers: [9d52a5ae4e94]
	I0802 17:29:43.660770   13490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0802 17:29:43.686859   13490 logs.go:276] 1 containers: [8644df599db2]
	I0802 17:29:43.686927   13490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0802 17:29:43.714264   13490 logs.go:276] 1 containers: [097b9119a9bd]
	I0802 17:29:43.714343   13490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0802 17:29:43.738932   13490 logs.go:276] 1 containers: [254be607cf1d]
	I0802 17:29:43.739008   13490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0802 17:29:43.768218   13490 logs.go:276] 1 containers: [a9d8d91be079]
	I0802 17:29:43.768292   13490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0802 17:29:43.787107   13490 logs.go:276] 1 containers: [3e94af1c2883]
	I0802 17:29:43.787192   13490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0802 17:29:43.810482   13490 logs.go:276] 0 containers: []
	W0802 17:29:43.810512   13490 logs.go:278] No container was found matching "kindnet"
	I0802 17:29:43.810525   13490 logs.go:123] Gathering logs for kubelet ...
	I0802 17:29:43.810538   13490 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0802 17:29:43.892048   13490 logs.go:123] Gathering logs for dmesg ...
	I0802 17:29:43.892082   13490 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0802 17:29:43.912991   13490 logs.go:123] Gathering logs for kube-apiserver [9d52a5ae4e94] ...
	I0802 17:29:43.913019   13490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9d52a5ae4e94"
	I0802 17:29:43.980497   13490 logs.go:123] Gathering logs for coredns [097b9119a9bd] ...
	I0802 17:29:43.980539   13490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 097b9119a9bd"
	I0802 17:29:44.018234   13490 logs.go:123] Gathering logs for kube-scheduler [254be607cf1d] ...
	I0802 17:29:44.018264   13490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 254be607cf1d"
	I0802 17:29:44.059857   13490 logs.go:123] Gathering logs for kube-proxy [a9d8d91be079] ...
	I0802 17:29:44.059895   13490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a9d8d91be079"
	I0802 17:29:44.074093   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:29:44.091102   13490 logs.go:123] Gathering logs for kube-controller-manager [3e94af1c2883] ...
	I0802 17:29:44.091138   13490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e94af1c2883"
	I0802 17:29:44.122419   13490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:29:44.172607   13490 logs.go:123] Gathering logs for container status ...
	I0802 17:29:44.172639   13490 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0802 17:29:44.229026   13490 logs.go:123] Gathering logs for describe nodes ...
	I0802 17:29:44.229057   13490 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0802 17:29:44.606551   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:29:44.635631   13490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:29:44.681353   13490 logs.go:123] Gathering logs for etcd [8644df599db2] ...
	I0802 17:29:44.681379   13490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8644df599db2"
	I0802 17:29:44.778322   13490 logs.go:123] Gathering logs for Docker ...
	I0802 17:29:44.778367   13490 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0802 17:29:45.074514   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:29:45.129080   13490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:29:45.574160   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:29:45.622213   13490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:29:46.073310   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:29:46.122271   13490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:29:46.584420   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:29:46.622368   13490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:29:47.075489   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:29:47.125652   13490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:29:47.352852   13490 api_server.go:253] Checking apiserver healthz at https://192.168.39.195:8443/healthz ...
	I0802 17:29:47.357070   13490 api_server.go:279] https://192.168.39.195:8443/healthz returned 200:
	ok
	I0802 17:29:47.357879   13490 api_server.go:141] control plane version: v1.30.3
	I0802 17:29:47.357901   13490 api_server.go:131] duration metric: took 3.721466297s to wait for apiserver health ...
	I0802 17:29:47.357908   13490 system_pods.go:43] waiting for kube-system pods to appear ...
	I0802 17:29:47.357968   13490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0802 17:29:47.388677   13490 logs.go:276] 1 containers: [9d52a5ae4e94]
	I0802 17:29:47.388759   13490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0802 17:29:47.419668   13490 logs.go:276] 1 containers: [8644df599db2]
	I0802 17:29:47.419740   13490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0802 17:29:47.451968   13490 logs.go:276] 1 containers: [097b9119a9bd]
	I0802 17:29:47.452043   13490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0802 17:29:47.491306   13490 logs.go:276] 1 containers: [254be607cf1d]
	I0802 17:29:47.491371   13490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0802 17:29:47.546679   13490 logs.go:276] 1 containers: [a9d8d91be079]
	I0802 17:29:47.546758   13490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0802 17:29:47.574375   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:29:47.600294   13490 logs.go:276] 1 containers: [3e94af1c2883]
	I0802 17:29:47.600359   13490 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0802 17:29:47.622011   13490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:29:47.624143   13490 logs.go:276] 0 containers: []
	W0802 17:29:47.624161   13490 logs.go:278] No container was found matching "kindnet"
	I0802 17:29:47.624171   13490 logs.go:123] Gathering logs for kube-apiserver [9d52a5ae4e94] ...
	I0802 17:29:47.624181   13490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9d52a5ae4e94"
	I0802 17:29:47.677016   13490 logs.go:123] Gathering logs for etcd [8644df599db2] ...
	I0802 17:29:47.677047   13490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8644df599db2"
	I0802 17:29:47.723040   13490 logs.go:123] Gathering logs for Docker ...
	I0802 17:29:47.723068   13490 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0802 17:29:47.799479   13490 logs.go:123] Gathering logs for container status ...
	I0802 17:29:47.799508   13490 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0802 17:29:47.856193   13490 logs.go:123] Gathering logs for kube-controller-manager [3e94af1c2883] ...
	I0802 17:29:47.856219   13490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e94af1c2883"
	I0802 17:29:47.926903   13490 logs.go:123] Gathering logs for kubelet ...
	I0802 17:29:47.926941   13490 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0802 17:29:48.020423   13490 logs.go:123] Gathering logs for dmesg ...
	I0802 17:29:48.020463   13490 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0802 17:29:48.036831   13490 logs.go:123] Gathering logs for describe nodes ...
	I0802 17:29:48.036871   13490 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0802 17:29:48.073807   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:29:48.122202   13490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:29:48.239250   13490 logs.go:123] Gathering logs for coredns [097b9119a9bd] ...
	I0802 17:29:48.239287   13490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 097b9119a9bd"
	I0802 17:29:48.269458   13490 logs.go:123] Gathering logs for kube-scheduler [254be607cf1d] ...
	I0802 17:29:48.269483   13490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 254be607cf1d"
	I0802 17:29:48.310485   13490 logs.go:123] Gathering logs for kube-proxy [a9d8d91be079] ...
	I0802 17:29:48.310522   13490 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a9d8d91be079"
	I0802 17:29:48.573105   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:29:48.622346   13490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:29:49.075914   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:29:49.133507   13490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:29:49.574779   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:29:49.621797   13490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:29:50.073389   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:29:50.121427   13490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:29:50.608223   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:29:50.621165   13490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:29:50.850578   13490 system_pods.go:59] 18 kube-system pods found
	I0802 17:29:50.850625   13490 system_pods.go:61] "coredns-7db6d8ff4d-8cscl" [8dda6696-3f82-491b-95b0-e55420e3bd63] Running
	I0802 17:29:50.850639   13490 system_pods.go:61] "csi-hostpath-attacher-0" [10fda67c-5a2e-40a5-83d3-6b4f6bfff4c2] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0802 17:29:50.850651   13490 system_pods.go:61] "csi-hostpath-resizer-0" [2703d480-f129-4e0f-b644-2cd458f269fe] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0802 17:29:50.850661   13490 system_pods.go:61] "csi-hostpathplugin-lxvqd" [9b8072cc-3588-458c-83bb-202a696ed7cd] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0802 17:29:50.850668   13490 system_pods.go:61] "etcd-addons-723198" [1deaad82-cf48-41f3-843b-58afde311d83] Running
	I0802 17:29:50.850674   13490 system_pods.go:61] "kube-apiserver-addons-723198" [09e55315-cad3-40ba-aea1-da42e87c9e0e] Running
	I0802 17:29:50.850679   13490 system_pods.go:61] "kube-controller-manager-addons-723198" [edd1db85-efd7-479e-a8bb-d39e63190ea6] Running
	I0802 17:29:50.850689   13490 system_pods.go:61] "kube-ingress-dns-minikube" [324e75f9-02d5-4c77-a7d8-279864fe99b4] Running
	I0802 17:29:50.850694   13490 system_pods.go:61] "kube-proxy-b4wn5" [d0741818-0c24-49a8-9287-fb28de7043b8] Running
	I0802 17:29:50.850700   13490 system_pods.go:61] "kube-scheduler-addons-723198" [5efb14fa-51f6-45b1-a7df-3361584f2cc0] Running
	I0802 17:29:50.850704   13490 system_pods.go:61] "metrics-server-c59844bb4-gm5qb" [e54a0dd1-9f70-4238-9b98-b5c143ac6901] Running
	I0802 17:29:50.850710   13490 system_pods.go:61] "nvidia-device-plugin-daemonset-g2nk5" [45436a2b-b5fe-4f2a-aac8-f00a5be8dacf] Running
	I0802 17:29:50.850716   13490 system_pods.go:61] "registry-698f998955-rmkd9" [4d303fd6-afaa-4c57-8d55-3ec1c66f6415] Running
	I0802 17:29:50.850721   13490 system_pods.go:61] "registry-proxy-lnmnh" [1f0dd687-67d1-4e50-89e4-61b430552e7b] Running
	I0802 17:29:50.850726   13490 system_pods.go:61] "snapshot-controller-745499f584-f7zc5" [a0bdbb7e-02ba-4bd3-bc4c-a839c8824f46] Running
	I0802 17:29:50.850731   13490 system_pods.go:61] "snapshot-controller-745499f584-f9j5g" [37e81ade-8822-41b3-9f03-7ecfa30a89c7] Running
	I0802 17:29:50.850741   13490 system_pods.go:61] "storage-provisioner" [ed545444-77ee-4783-ae90-659a5c38ddc6] Running
	I0802 17:29:50.850746   13490 system_pods.go:61] "tiller-deploy-6677d64bcd-bqrdb" [a7638602-6f74-49ff-819b-f5b6644a9849] Running
	I0802 17:29:50.850753   13490 system_pods.go:74] duration metric: took 3.492838372s to wait for pod list to return data ...
	I0802 17:29:50.850764   13490 default_sa.go:34] waiting for default service account to be created ...
	I0802 17:29:50.853085   13490 default_sa.go:45] found service account: "default"
	I0802 17:29:50.853115   13490 default_sa.go:55] duration metric: took 2.344465ms for default service account to be created ...
	I0802 17:29:50.853123   13490 system_pods.go:116] waiting for k8s-apps to be running ...
	I0802 17:29:50.861002   13490 system_pods.go:86] 18 kube-system pods found
	I0802 17:29:50.861029   13490 system_pods.go:89] "coredns-7db6d8ff4d-8cscl" [8dda6696-3f82-491b-95b0-e55420e3bd63] Running
	I0802 17:29:50.861039   13490 system_pods.go:89] "csi-hostpath-attacher-0" [10fda67c-5a2e-40a5-83d3-6b4f6bfff4c2] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0802 17:29:50.861046   13490 system_pods.go:89] "csi-hostpath-resizer-0" [2703d480-f129-4e0f-b644-2cd458f269fe] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0802 17:29:50.861054   13490 system_pods.go:89] "csi-hostpathplugin-lxvqd" [9b8072cc-3588-458c-83bb-202a696ed7cd] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0802 17:29:50.861059   13490 system_pods.go:89] "etcd-addons-723198" [1deaad82-cf48-41f3-843b-58afde311d83] Running
	I0802 17:29:50.861064   13490 system_pods.go:89] "kube-apiserver-addons-723198" [09e55315-cad3-40ba-aea1-da42e87c9e0e] Running
	I0802 17:29:50.861068   13490 system_pods.go:89] "kube-controller-manager-addons-723198" [edd1db85-efd7-479e-a8bb-d39e63190ea6] Running
	I0802 17:29:50.861072   13490 system_pods.go:89] "kube-ingress-dns-minikube" [324e75f9-02d5-4c77-a7d8-279864fe99b4] Running
	I0802 17:29:50.861076   13490 system_pods.go:89] "kube-proxy-b4wn5" [d0741818-0c24-49a8-9287-fb28de7043b8] Running
	I0802 17:29:50.861080   13490 system_pods.go:89] "kube-scheduler-addons-723198" [5efb14fa-51f6-45b1-a7df-3361584f2cc0] Running
	I0802 17:29:50.861084   13490 system_pods.go:89] "metrics-server-c59844bb4-gm5qb" [e54a0dd1-9f70-4238-9b98-b5c143ac6901] Running
	I0802 17:29:50.861088   13490 system_pods.go:89] "nvidia-device-plugin-daemonset-g2nk5" [45436a2b-b5fe-4f2a-aac8-f00a5be8dacf] Running
	I0802 17:29:50.861092   13490 system_pods.go:89] "registry-698f998955-rmkd9" [4d303fd6-afaa-4c57-8d55-3ec1c66f6415] Running
	I0802 17:29:50.861096   13490 system_pods.go:89] "registry-proxy-lnmnh" [1f0dd687-67d1-4e50-89e4-61b430552e7b] Running
	I0802 17:29:50.861099   13490 system_pods.go:89] "snapshot-controller-745499f584-f7zc5" [a0bdbb7e-02ba-4bd3-bc4c-a839c8824f46] Running
	I0802 17:29:50.861107   13490 system_pods.go:89] "snapshot-controller-745499f584-f9j5g" [37e81ade-8822-41b3-9f03-7ecfa30a89c7] Running
	I0802 17:29:50.861111   13490 system_pods.go:89] "storage-provisioner" [ed545444-77ee-4783-ae90-659a5c38ddc6] Running
	I0802 17:29:50.861114   13490 system_pods.go:89] "tiller-deploy-6677d64bcd-bqrdb" [a7638602-6f74-49ff-819b-f5b6644a9849] Running
	I0802 17:29:50.861120   13490 system_pods.go:126] duration metric: took 7.992467ms to wait for k8s-apps to be running ...
	I0802 17:29:50.861127   13490 system_svc.go:44] waiting for kubelet service to be running ....
	I0802 17:29:50.861166   13490 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0802 17:29:50.878318   13490 system_svc.go:56] duration metric: took 17.181577ms WaitForService to wait for kubelet
	I0802 17:29:50.878354   13490 kubeadm.go:582] duration metric: took 2m6.758843091s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0802 17:29:50.878386   13490 node_conditions.go:102] verifying NodePressure condition ...
	I0802 17:29:50.881220   13490 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0802 17:29:50.881257   13490 node_conditions.go:123] node cpu capacity is 2
	I0802 17:29:50.881275   13490 node_conditions.go:105] duration metric: took 2.882202ms to run NodePressure ...
	I0802 17:29:50.881289   13490 start.go:241] waiting for startup goroutines ...
	I0802 17:29:51.074651   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:29:51.122362   13490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:29:51.573795   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:29:51.622070   13490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:29:52.074225   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:29:52.121889   13490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:29:52.573781   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:29:52.621915   13490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:29:53.073965   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:29:53.124153   13490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:29:53.575161   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:29:53.621757   13490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:29:54.074278   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:29:54.122175   13490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:29:54.574484   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:29:54.621486   13490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:29:55.073578   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:29:55.122287   13490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:29:55.584770   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:29:55.622173   13490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:29:56.077537   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:29:56.121977   13490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:29:56.574631   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:29:56.622585   13490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:29:57.073178   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:29:57.122153   13490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:29:57.573886   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:29:57.621992   13490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:29:58.180127   13490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:29:58.184663   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:29:58.574512   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:29:58.621709   13490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:29:59.073336   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:29:59.121224   13490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:29:59.575088   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:29:59.692152   13490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:30:00.074281   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:30:00.121253   13490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:30:00.574506   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:30:00.621679   13490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:30:01.075110   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:30:01.123547   13490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:30:01.575019   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:30:01.622188   13490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:30:02.076194   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:30:02.121663   13490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:30:02.575949   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:30:02.621965   13490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:30:03.073421   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:30:03.123947   13490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:30:03.579098   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:30:03.622846   13490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:30:04.078024   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:30:04.122410   13490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:30:04.575737   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:30:04.623490   13490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:30:05.075584   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:30:05.122759   13490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:30:05.573965   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:30:05.622171   13490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:30:06.074066   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:30:06.130017   13490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:30:06.575146   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:30:06.630048   13490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:30:07.074065   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:30:07.122288   13490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:30:07.573913   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:30:07.622441   13490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:30:08.073930   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:30:08.122424   13490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:30:08.624589   13490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:30:08.625678   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:30:09.075241   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:30:09.124714   13490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:30:09.575348   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:30:09.621569   13490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:30:10.074026   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:30:10.122264   13490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:30:10.574703   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:30:10.621694   13490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:30:11.073803   13490 kapi.go:107] duration metric: took 2m13.505657833s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0802 17:30:11.121724   13490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:30:11.621381   13490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:30:12.122155   13490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:30:12.622415   13490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:30:13.123056   13490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:30:13.621858   13490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:30:14.121640   13490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:30:14.622429   13490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:30:15.122405   13490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:30:15.621701   13490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:30:16.121624   13490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:30:16.621886   13490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:30:17.325007   13490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:30:17.623054   13490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:30:18.120999   13490 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:30:18.623150   13490 kapi.go:107] duration metric: took 2m24.50575849s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0802 17:30:43.239129   13490 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0802 17:30:43.239160   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0802 17:30:43.738333   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0802 17:30:44.240783   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0802 17:30:44.738428   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0802 17:30:45.239207   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0802 17:30:45.739526   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0802 17:30:46.249108   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0802 17:30:46.738760   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0802 17:30:47.242401   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0802 17:30:47.738630   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0802 17:30:48.239071   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0802 17:30:48.738464   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0802 17:30:49.238095   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0802 17:30:49.740815   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0802 17:30:50.246280   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0802 17:30:50.738597   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0802 17:30:51.238827   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0802 17:30:51.739043   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0802 17:30:52.239095   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0802 17:30:52.738411   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0802 17:30:53.239072   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0802 17:30:53.738842   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0802 17:30:54.243577   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0802 17:30:54.738198   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0802 17:30:55.238497   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0802 17:30:55.738946   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0802 17:30:56.239540   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0802 17:30:56.738555   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0802 17:30:57.239273   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0802 17:30:57.738998   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0802 17:30:58.239664   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0802 17:30:58.739076   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0802 17:30:59.239144   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0802 17:30:59.738252   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0802 17:31:00.244105   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0802 17:31:00.738649   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0802 17:31:01.239072   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0802 17:31:01.738282   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0802 17:31:02.242479   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0802 17:31:02.738442   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0802 17:31:03.240584   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0802 17:31:03.738194   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0802 17:31:04.238356   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0802 17:31:04.739968   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0802 17:31:05.239460   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0802 17:31:05.739138   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0802 17:31:06.241262   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0802 17:31:06.738976   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0802 17:31:07.238852   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0802 17:31:07.739365   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0802 17:31:08.242003   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0802 17:31:08.738604   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0802 17:31:09.238889   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0802 17:31:09.738761   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0802 17:31:10.244168   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0802 17:31:10.738890   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0802 17:31:11.238606   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0802 17:31:11.739109   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0802 17:31:12.238809   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0802 17:31:12.739203   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0802 17:31:13.238879   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0802 17:31:13.739391   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0802 17:31:14.240962   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0802 17:31:14.739306   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0802 17:31:15.239963   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0802 17:31:15.738549   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0802 17:31:16.239663   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0802 17:31:16.738932   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0802 17:31:17.239250   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0802 17:31:17.738687   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0802 17:31:18.241129   13490 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0802 17:31:18.738352   13490 kapi.go:107] duration metric: took 3m19.503305282s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0802 17:31:18.739932   13490 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-723198 cluster.
	I0802 17:31:18.741294   13490 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0802 17:31:18.742521   13490 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0802 17:31:18.743915   13490 out.go:177] * Enabled addons: cloud-spanner, storage-provisioner-rancher, volcano, nvidia-device-plugin, storage-provisioner, metrics-server, ingress-dns, helm-tiller, inspektor-gadget, yakd, default-storageclass, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I0802 17:31:18.745140   13490 addons.go:510] duration metric: took 3m34.62559673s for enable addons: enabled=[cloud-spanner storage-provisioner-rancher volcano nvidia-device-plugin storage-provisioner metrics-server ingress-dns helm-tiller inspektor-gadget yakd default-storageclass volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
	I0802 17:31:18.745176   13490 start.go:246] waiting for cluster config update ...
	I0802 17:31:18.745193   13490 start.go:255] writing updated cluster config ...
	I0802 17:31:18.745432   13490 ssh_runner.go:195] Run: rm -f paused
	I0802 17:31:18.798077   13490 start.go:600] kubectl: 1.30.3, cluster: 1.30.3 (minor skew: 0)
	I0802 17:31:18.799661   13490 out.go:177] * Done! kubectl is now configured to use "addons-723198" cluster and "default" namespace by default
	
	
	==> Docker <==
	Aug 02 17:32:38 addons-723198 dockerd[1200]: time="2024-08-02T17:32:38.418027343Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 02 17:32:38 addons-723198 cri-dockerd[1093]: time="2024-08-02T17:32:38Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/83996197e2499d25af26b4b24959c7bdbbf4897e13f0645272babffb9263db4a/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Aug 02 17:32:42 addons-723198 cri-dockerd[1093]: time="2024-08-02T17:32:42Z" level=info msg="Pulling image gcr.io/k8s-minikube/busybox:latest: 5cc84ad355aa: Pulling fs layer "
	Aug 02 17:32:52 addons-723198 cri-dockerd[1093]: time="2024-08-02T17:32:52Z" level=info msg="Pulling image gcr.io/k8s-minikube/busybox:latest: 5cc84ad355aa: Pulling fs layer "
	Aug 02 17:33:02 addons-723198 cri-dockerd[1093]: time="2024-08-02T17:33:02Z" level=info msg="Pulling image gcr.io/k8s-minikube/busybox:latest: 5cc84ad355aa: Pulling fs layer "
	Aug 02 17:33:12 addons-723198 cri-dockerd[1093]: time="2024-08-02T17:33:12Z" level=info msg="Pulling image gcr.io/k8s-minikube/busybox:latest: 5cc84ad355aa: Pulling fs layer "
	Aug 02 17:33:22 addons-723198 cri-dockerd[1093]: time="2024-08-02T17:33:22Z" level=info msg="Pulling image gcr.io/k8s-minikube/busybox:latest: 5cc84ad355aa: Pulling fs layer "
	Aug 02 17:33:31 addons-723198 dockerd[1194]: time="2024-08-02T17:33:31.457395893Z" level=info msg="ignoring event" container=cf145e49f3857ec420f05e4ee803cdeacb7ab1ae2fa1ee49e12d2e0c76776c1f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 02 17:33:31 addons-723198 dockerd[1200]: time="2024-08-02T17:33:31.457779477Z" level=info msg="shim disconnected" id=cf145e49f3857ec420f05e4ee803cdeacb7ab1ae2fa1ee49e12d2e0c76776c1f namespace=moby
	Aug 02 17:33:31 addons-723198 dockerd[1200]: time="2024-08-02T17:33:31.458341415Z" level=warning msg="cleaning up after shim disconnected" id=cf145e49f3857ec420f05e4ee803cdeacb7ab1ae2fa1ee49e12d2e0c76776c1f namespace=moby
	Aug 02 17:33:31 addons-723198 dockerd[1200]: time="2024-08-02T17:33:31.458376854Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 02 17:33:31 addons-723198 dockerd[1194]: time="2024-08-02T17:33:31.482113399Z" level=info msg="ignoring event" container=2f0ac0d6ba5b8bc7cc5dffc0ac6f67073f00947d3fe3fe81e140858b24225a41 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 02 17:33:31 addons-723198 dockerd[1200]: time="2024-08-02T17:33:31.482184234Z" level=info msg="shim disconnected" id=2f0ac0d6ba5b8bc7cc5dffc0ac6f67073f00947d3fe3fe81e140858b24225a41 namespace=moby
	Aug 02 17:33:31 addons-723198 dockerd[1200]: time="2024-08-02T17:33:31.482312564Z" level=warning msg="cleaning up after shim disconnected" id=2f0ac0d6ba5b8bc7cc5dffc0ac6f67073f00947d3fe3fe81e140858b24225a41 namespace=moby
	Aug 02 17:33:31 addons-723198 dockerd[1200]: time="2024-08-02T17:33:31.482327159Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 02 17:33:31 addons-723198 cri-dockerd[1093]: time="2024-08-02T17:33:31Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"registry-698f998955-rmkd9_kube-system\": unexpected command output nsenter: cannot open /proc/3329/ns/net: No such file or directory\n with error: exit status 1"
	Aug 02 17:33:31 addons-723198 dockerd[1194]: time="2024-08-02T17:33:31.705373362Z" level=info msg="ignoring event" container=cf8149a882997e2801696e21979c70b431dd081c6c326e6af91452d45e660373 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 02 17:33:31 addons-723198 dockerd[1200]: time="2024-08-02T17:33:31.707776465Z" level=info msg="shim disconnected" id=cf8149a882997e2801696e21979c70b431dd081c6c326e6af91452d45e660373 namespace=moby
	Aug 02 17:33:31 addons-723198 dockerd[1200]: time="2024-08-02T17:33:31.708076171Z" level=warning msg="cleaning up after shim disconnected" id=cf8149a882997e2801696e21979c70b431dd081c6c326e6af91452d45e660373 namespace=moby
	Aug 02 17:33:31 addons-723198 dockerd[1200]: time="2024-08-02T17:33:31.708108528Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 02 17:33:31 addons-723198 dockerd[1194]: time="2024-08-02T17:33:31.867297337Z" level=info msg="ignoring event" container=0b2b2afdf4f5376a0197c4319c33598b56ce9954fa64d1fe38ca3712367263e6 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 02 17:33:31 addons-723198 dockerd[1200]: time="2024-08-02T17:33:31.866785542Z" level=info msg="shim disconnected" id=0b2b2afdf4f5376a0197c4319c33598b56ce9954fa64d1fe38ca3712367263e6 namespace=moby
	Aug 02 17:33:31 addons-723198 dockerd[1200]: time="2024-08-02T17:33:31.869005248Z" level=warning msg="cleaning up after shim disconnected" id=0b2b2afdf4f5376a0197c4319c33598b56ce9954fa64d1fe38ca3712367263e6 namespace=moby
	Aug 02 17:33:31 addons-723198 dockerd[1200]: time="2024-08-02T17:33:31.869273139Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 02 17:33:32 addons-723198 cri-dockerd[1093]: time="2024-08-02T17:33:32Z" level=info msg="Pulling image gcr.io/k8s-minikube/busybox:latest: 5cc84ad355aa: Pulling fs layer "
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED              STATE               NAME                                     ATTEMPT             POD ID              POD
	c855120c6c5bc       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e                                          About a minute ago   Running             busybox                                  0                   cb77b032f3536       busybox
	9b382145899a6       registry.k8s.io/ingress-nginx/controller@sha256:e6439a12b52076965928e83b7b56aae6731231677b01e81818bce7fa5c60161a                             3 minutes ago        Running             controller                               0                   8ca7671350353       ingress-nginx-controller-6d9bd977d4-vjzjl
	7e2da7b3fca02       registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f                          3 minutes ago        Running             csi-snapshotter                          0                   7750a49ead5c6       csi-hostpathplugin-lxvqd
	2c084892e7e9d       registry.k8s.io/sig-storage/csi-provisioner@sha256:ee3b525d5b89db99da3b8eb521d9cd90cb6e9ef0fbb651e98bb37be78d36b5b8                          3 minutes ago        Running             csi-provisioner                          0                   7750a49ead5c6       csi-hostpathplugin-lxvqd
	d1dd7c125f19c       registry.k8s.io/sig-storage/livenessprobe@sha256:cacee2b5c36dd59d4c7e8469c05c9e4ef53ecb2df9025fa8c10cdaf61bce62f0                            3 minutes ago        Running             liveness-probe                           0                   7750a49ead5c6       csi-hostpathplugin-lxvqd
	80587d5a6d5e1       registry.k8s.io/sig-storage/hostpathplugin@sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5                           3 minutes ago        Running             hostpath                                 0                   7750a49ead5c6       csi-hostpathplugin-lxvqd
	9e5c159c85254       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:f1c25991bac2fbb7f5fcf91ed9438df31e30edee6bed5a780464238aa09ad24c                3 minutes ago        Running             node-driver-registrar                    0                   7750a49ead5c6       csi-hostpathplugin-lxvqd
	a1087e5b3bef5       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:80b9ba94aa2afe24553d69bd165a6a51552d1582d68618ec00d3b804a7d9193c   3 minutes ago        Running             csi-external-health-monitor-controller   0                   7750a49ead5c6       csi-hostpathplugin-lxvqd
	999efcdcb0963       registry.k8s.io/sig-storage/csi-resizer@sha256:425d8f1b769398127767b06ed97ce62578a3179bcb99809ce93a1649e025ffe7                              3 minutes ago        Running             csi-resizer                              0                   8ea347e9a8cac       csi-hostpath-resizer-0
	d2fc3f89362af       registry.k8s.io/sig-storage/csi-attacher@sha256:9a685020911e2725ad019dbce6e4a5ab93d51e3d4557f115e64343345e05781b                             3 minutes ago        Running             csi-attacher                             0                   7ecce34164dc1       csi-hostpath-attacher-0
	9fffcb3ae3ed6       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:36d05b4077fb8e3d13663702fa337f124675ba8667cbd949c03a8e8ea6fa4366                   3 minutes ago        Exited              patch                                    0                   cec1a9a587965       ingress-nginx-admission-patch-ts22b
	dce6c1c6a11cb       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:36d05b4077fb8e3d13663702fa337f124675ba8667cbd949c03a8e8ea6fa4366                   3 minutes ago        Exited              create                                   0                   3937eb8d7b717       ingress-nginx-admission-create-d477w
	df20f4925f0b8       registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280                      3 minutes ago        Running             volume-snapshot-controller               0                   b53d126199363       snapshot-controller-745499f584-f7zc5
	70f6592a962ce       registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280                      3 minutes ago        Running             volume-snapshot-controller               0                   eea63e8d2d8cc       snapshot-controller-745499f584-f9j5g
	7cd043d826156       rancher/local-path-provisioner@sha256:e34c88ae0affb1cdefbb874140d6339d4a27ec4ee420ae8199cd839997b05246                                       3 minutes ago        Running             local-path-provisioner                   0                   2ae52600c49c7       local-path-provisioner-8d985888d-nzbsw
	2f0ac0d6ba5b8       gcr.io/k8s-minikube/kube-registry-proxy@sha256:b3fa0b2df8737fdb85ad5918a7e2652527463e357afff83a5e5bb966bcedc367                              3 minutes ago        Exited              registry-proxy                           0                   0b2b2afdf4f53       registry-proxy-lnmnh
	8f17e99c7e835       marcnuri/yakd@sha256:c5414196116a2266ad097b0468833b73ef1d6c7922241115fe203fb826381624                                                        4 minutes ago        Running             yakd                                     0                   6925f672dd185       yakd-dashboard-799879c74f-rkj2g
	cf145e49f3857       registry@sha256:12120425f07de11a1b899e418d4b0ea174c8d4d572d45bdb640f93bc7ca06a3d                                                             4 minutes ago        Exited              registry                                 0                   cf8149a882997       registry-698f998955-rmkd9
	8ee885c4a1c07       gcr.io/k8s-minikube/minikube-ingress-dns@sha256:4211a1de532376c881851542238121b26792225faa36a7b02dccad88fd05797c                             4 minutes ago        Running             minikube-ingress-dns                     0                   57006529b60db       kube-ingress-dns-minikube
	5c8593ba936ae       ghcr.io/helm/tiller@sha256:4c43eb385032945cad047d2350e4945d913b90b3ab43ee61cecb32a495c6df0f                                                  5 minutes ago        Running             tiller                                   0                   bbb35f8108029       tiller-deploy-6677d64bcd-bqrdb
	48a37af89feac       gcr.io/cloud-spanner-emulator/emulator@sha256:ea3a9e70a98bf648952401e964c5403d93e980837acf924288df19e0077ae7fb                               5 minutes ago        Running             cloud-spanner-emulator                   0                   e1bb7102f5a26       cloud-spanner-emulator-5455fb9b69-26pff
	477611d007134       nvcr.io/nvidia/k8s-device-plugin@sha256:89612c7851300ddeed218b9df0dcb33bbb8495282aa17c554038e52387ce7f1e                                     5 minutes ago        Running             nvidia-device-plugin-ctr                 0                   b16a9279d058d       nvidia-device-plugin-daemonset-g2nk5
	9529af40bf73a       6e38f40d628db                                                                                                                                5 minutes ago        Running             storage-provisioner                      0                   29bcb751234c1       storage-provisioner
	097b9119a9bdb       cbb01a7bd410d                                                                                                                                5 minutes ago        Running             coredns                                  0                   e700518d7cd23       coredns-7db6d8ff4d-8cscl
	a9d8d91be0798       55bb025d2cfa5                                                                                                                                5 minutes ago        Running             kube-proxy                               0                   b0c0c27875834       kube-proxy-b4wn5
	254be607cf1d4       3edc18e7b7672                                                                                                                                6 minutes ago        Running             kube-scheduler                           0                   5ce2e1e2ed7d0       kube-scheduler-addons-723198
	9d52a5ae4e942       1f6d574d502f3                                                                                                                                6 minutes ago        Running             kube-apiserver                           0                   3dcc8ad379bb9       kube-apiserver-addons-723198
	8644df599db2e       3861cfcd7c04c                                                                                                                                6 minutes ago        Running             etcd                                     0                   2a043195a806a       etcd-addons-723198
	3e94af1c28832       76932a3b37d7e                                                                                                                                6 minutes ago        Running             kube-controller-manager                  0                   c08343d6553e5       kube-controller-manager-addons-723198
	
	
	==> controller_ingress [9b382145899a] <==
	I0802 17:30:18.101639       8 event.go:377] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"ingress-nginx", Name:"udp-services", UID:"67e2a791-e2e4-405f-9425-5e46abd3c322", APIVersion:"v1", ResourceVersion:"690", FieldPath:""}): type: 'Normal' reason: 'CREATE' ConfigMap ingress-nginx/udp-services
	I0802 17:30:19.286710       8 nginx.go:317] "Starting NGINX process"
	I0802 17:30:19.286818       8 leaderelection.go:250] attempting to acquire leader lease ingress-nginx/ingress-nginx-leader...
	I0802 17:30:19.287219       8 nginx.go:337] "Starting validation webhook" address=":8443" certPath="/usr/local/certificates/cert" keyPath="/usr/local/certificates/key"
	I0802 17:30:19.287405       8 controller.go:193] "Configuration changes detected, backend reload required"
	I0802 17:30:19.314278       8 leaderelection.go:260] successfully acquired lease ingress-nginx/ingress-nginx-leader
	I0802 17:30:19.314467       8 status.go:85] "New leader elected" identity="ingress-nginx-controller-6d9bd977d4-vjzjl"
	I0802 17:30:19.319204       8 status.go:219] "POD is not ready" pod="ingress-nginx/ingress-nginx-controller-6d9bd977d4-vjzjl" node="addons-723198"
	I0802 17:30:19.355079       8 controller.go:213] "Backend successfully reloaded"
	I0802 17:30:19.355153       8 controller.go:224] "Initial sync, sleeping for 1 second"
	I0802 17:30:19.355427       8 event.go:377] Event(v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-6d9bd977d4-vjzjl", UID:"aa3e378d-22e0-44bb-a741-a59dc012bc1d", APIVersion:"v1", ResourceVersion:"715", FieldPath:""}): type: 'Normal' reason: 'RELOAD' NGINX reload triggered due to a change in configuration
	W0802 17:32:31.790767       8 controller.go:1110] Error obtaining Endpoints for Service "default/nginx": no object matching key "default/nginx" in local store
	I0802 17:32:31.814943       8 admission.go:149] processed ingress via admission controller {testedIngressLength:1 testedIngressTime:0.024s renderingIngressLength:1 renderingIngressTime:0s admissionTime:0.024s testedConfigurationSize:18.1kB}
	I0802 17:32:31.815000       8 main.go:107] "successfully validated configuration, accepting" ingress="default/nginx-ingress"
	I0802 17:32:31.823591       8 store.go:440] "Found valid IngressClass" ingress="default/nginx-ingress" ingressclass="nginx"
	I0802 17:32:31.824613       8 event.go:377] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"default", Name:"nginx-ingress", UID:"3574d3d6-b1b6-4bef-b8fd-da836cc054fe", APIVersion:"networking.k8s.io/v1", ResourceVersion:"1980", FieldPath:""}): type: 'Normal' reason: 'Sync' Scheduled for sync
	W0802 17:32:33.075988       8 controller.go:1216] Service "default/nginx" does not have any active Endpoint.
	I0802 17:32:33.076474       8 controller.go:193] "Configuration changes detected, backend reload required"
	I0802 17:32:33.114727       8 controller.go:213] "Backend successfully reloaded"
	I0802 17:32:33.116049       8 event.go:377] Event(v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-6d9bd977d4-vjzjl", UID:"aa3e378d-22e0-44bb-a741-a59dc012bc1d", APIVersion:"v1", ResourceVersion:"715", FieldPath:""}): type: 'Normal' reason: 'RELOAD' NGINX reload triggered due to a change in configuration
	W0802 17:32:36.410606       8 controller.go:1216] Service "default/nginx" does not have any active Endpoint.
	I0802 17:33:19.330433       8 status.go:304] "updating Ingress status" namespace="default" ingress="nginx-ingress" currentValue=null newValue=[{"ip":"192.168.39.195"}]
	W0802 17:33:19.336265       8 controller.go:1216] Service "default/nginx" does not have any active Endpoint.
	I0802 17:33:19.336603       8 event.go:377] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"default", Name:"nginx-ingress", UID:"3574d3d6-b1b6-4bef-b8fd-da836cc054fe", APIVersion:"networking.k8s.io/v1", ResourceVersion:"2075", FieldPath:""}): type: 'Normal' reason: 'Sync' Scheduled for sync
	W0802 17:33:31.355320       8 controller.go:1216] Service "default/nginx" does not have any active Endpoint.
	
	
	==> coredns [097b9119a9bd] <==
	[INFO] Reloading complete
	[INFO] 10.244.0.9:35387 - 7437 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000991038s
	[INFO] 10.244.0.9:35387 - 38961 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000095607s
	[INFO] 10.244.0.9:48416 - 40549 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000066047s
	[INFO] 10.244.0.9:48416 - 62566 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000053814s
	[INFO] 10.244.0.9:60657 - 64011 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.00007759s
	[INFO] 10.244.0.9:60657 - 18953 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000066097s
	[INFO] 10.244.0.9:39328 - 47529 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000134658s
	[INFO] 10.244.0.9:39328 - 22696 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000098431s
	[INFO] 10.244.0.9:39748 - 20925 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.00007929s
	[INFO] 10.244.0.9:39748 - 47544 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000075718s
	[INFO] 10.244.0.9:50617 - 18733 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000040445s
	[INFO] 10.244.0.9:50617 - 52526 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000079224s
	[INFO] 10.244.0.9:57131 - 10186 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000062356s
	[INFO] 10.244.0.9:57131 - 51400 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000075288s
	[INFO] 10.244.0.9:52858 - 17565 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000057711s
	[INFO] 10.244.0.9:52858 - 13459 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000081985s
	[INFO] 10.244.0.26:36718 - 25479 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000437571s
	[INFO] 10.244.0.26:43088 - 55729 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000150604s
	[INFO] 10.244.0.26:58981 - 50900 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000370377s
	[INFO] 10.244.0.26:35367 - 65248 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000184078s
	[INFO] 10.244.0.26:38373 - 50675 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.00009416s
	[INFO] 10.244.0.26:59934 - 42450 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000119973s
	[INFO] 10.244.0.26:48016 - 54724 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.000441661s
	[INFO] 10.244.0.26:60417 - 36060 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 458 0.001099002s
	
	
	==> describe nodes <==
	Name:               addons-723198
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-723198
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=db72189ad8010dba8f92a33c09569de9ae45dca9
	                    minikube.k8s.io/name=addons-723198
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_02T17_27_30_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-723198
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-723198"}
	                    kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 02 Aug 2024 17:27:27 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-723198
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 02 Aug 2024 17:33:27 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 02 Aug 2024 17:32:37 +0000   Fri, 02 Aug 2024 17:27:25 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 02 Aug 2024 17:32:37 +0000   Fri, 02 Aug 2024 17:27:25 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 02 Aug 2024 17:32:37 +0000   Fri, 02 Aug 2024 17:27:25 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 02 Aug 2024 17:32:37 +0000   Fri, 02 Aug 2024 17:27:31 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.195
	  Hostname:    addons-723198
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	System Info:
	  Machine ID:                 ad8e444a6a1a44829925e3cb0dd4dbd1
	  System UUID:                ad8e444a-6a1a-4482-9925-e3cb0dd4dbd1
	  Boot ID:                    2d558b4f-05ca-45af-8943-0f9b12fb3235
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.1.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (24 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         91s
	  default                     cloud-spanner-emulator-5455fb9b69-26pff      0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m45s
	  default                     nginx                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         61s
	  default                     registry-test                                0 (0%)        0 (0%)      0 (0%)           0 (0%)         62s
	  default                     task-pv-pod                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         55s
	  ingress-nginx               ingress-nginx-controller-6d9bd977d4-vjzjl    100m (5%)     0 (0%)      90Mi (2%)        0 (0%)         5m39s
	  kube-system                 coredns-7db6d8ff4d-8cscl                     100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     5m49s
	  kube-system                 csi-hostpath-attacher-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m35s
	  kube-system                 csi-hostpath-resizer-0                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m35s
	  kube-system                 csi-hostpathplugin-lxvqd                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m35s
	  kube-system                 etcd-addons-723198                           100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         6m3s
	  kube-system                 helm-test                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         61s
	  kube-system                 kube-apiserver-addons-723198                 250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m2s
	  kube-system                 kube-controller-manager-addons-723198        200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m2s
	  kube-system                 kube-ingress-dns-minikube                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m43s
	  kube-system                 kube-proxy-b4wn5                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m49s
	  kube-system                 kube-scheduler-addons-723198                 100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m3s
	  kube-system                 nvidia-device-plugin-daemonset-g2nk5         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m45s
	  kube-system                 snapshot-controller-745499f584-f7zc5         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m40s
	  kube-system                 snapshot-controller-745499f584-f9j5g         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m40s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m43s
	  kube-system                 tiller-deploy-6677d64bcd-bqrdb               0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m44s
	  local-path-storage          local-path-provisioner-8d985888d-nzbsw       0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m42s
	  yakd-dashboard              yakd-dashboard-799879c74f-rkj2g              0 (0%)        0 (0%)      128Mi (3%)       256Mi (6%)     5m42s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             388Mi (10%)  426Mi (11%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 5m47s                kube-proxy       
	  Normal  Starting                 6m8s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  6m8s (x4 over 6m8s)  kubelet          Node addons-723198 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m8s (x4 over 6m8s)  kubelet          Node addons-723198 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m8s (x4 over 6m8s)  kubelet          Node addons-723198 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  6m8s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 6m2s                 kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  6m2s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  6m2s                 kubelet          Node addons-723198 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m2s                 kubelet          Node addons-723198 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m2s                 kubelet          Node addons-723198 status is now: NodeHasSufficientPID
	  Normal  NodeReady                6m1s                 kubelet          Node addons-723198 status is now: NodeReady
	  Normal  RegisteredNode           5m50s                node-controller  Node addons-723198 event: Registered Node addons-723198 in Controller
	
	
	==> dmesg <==
	[  +4.999951] kauditd_printk_skb: 111 callbacks suppressed
	[  +5.294179] kauditd_printk_skb: 120 callbacks suppressed
	[  +5.379147] kauditd_printk_skb: 110 callbacks suppressed
	[Aug 2 17:28] kauditd_printk_skb: 7 callbacks suppressed
	[ +13.574575] kauditd_printk_skb: 6 callbacks suppressed
	[  +7.267939] kauditd_printk_skb: 2 callbacks suppressed
	[Aug 2 17:29] kauditd_printk_skb: 9 callbacks suppressed
	[  +5.781310] kauditd_printk_skb: 46 callbacks suppressed
	[  +5.036265] kauditd_printk_skb: 10 callbacks suppressed
	[Aug 2 17:30] kauditd_printk_skb: 14 callbacks suppressed
	[  +5.580197] kauditd_printk_skb: 44 callbacks suppressed
	[  +6.699267] kauditd_printk_skb: 27 callbacks suppressed
	[  +5.107780] kauditd_printk_skb: 21 callbacks suppressed
	[ +10.471310] kauditd_printk_skb: 21 callbacks suppressed
	[Aug 2 17:31] kauditd_printk_skb: 24 callbacks suppressed
	[ +12.697467] kauditd_printk_skb: 40 callbacks suppressed
	[ +15.244655] kauditd_printk_skb: 9 callbacks suppressed
	[  +6.943836] kauditd_printk_skb: 24 callbacks suppressed
	[  +6.899675] kauditd_printk_skb: 2 callbacks suppressed
	[Aug 2 17:32] kauditd_printk_skb: 20 callbacks suppressed
	[ +12.513558] kauditd_printk_skb: 7 callbacks suppressed
	[  +7.824625] kauditd_printk_skb: 2 callbacks suppressed
	[  +5.195845] kauditd_printk_skb: 3 callbacks suppressed
	[  +5.054539] kauditd_printk_skb: 33 callbacks suppressed
	[Aug 2 17:33] kauditd_printk_skb: 2 callbacks suppressed
	
	
	==> etcd [8644df599db2] <==
	{"level":"warn","ts":"2024-08-02T17:29:24.762138Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"165.984632ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/ingress-nginx/\" range_end:\"/registry/pods/ingress-nginx0\" ","response":"range_response_count:3 size:14077"}
	{"level":"info","ts":"2024-08-02T17:29:24.762205Z","caller":"traceutil/trace.go:171","msg":"trace[664579246] range","detail":"{range_begin:/registry/pods/ingress-nginx/; range_end:/registry/pods/ingress-nginx0; response_count:3; response_revision:1133; }","duration":"166.130725ms","start":"2024-08-02T17:29:24.596067Z","end":"2024-08-02T17:29:24.762197Z","steps":["trace[664579246] 'agreement among raft nodes before linearized reading'  (duration: 165.757179ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-02T17:29:24.762515Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"161.510378ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/metrics-server-c59844bb4-gm5qb\" ","response":"range_response_count:1 size:4224"}
	{"level":"info","ts":"2024-08-02T17:29:24.762553Z","caller":"traceutil/trace.go:171","msg":"trace[435111341] range","detail":"{range_begin:/registry/pods/kube-system/metrics-server-c59844bb4-gm5qb; range_end:; response_count:1; response_revision:1133; }","duration":"161.569269ms","start":"2024-08-02T17:29:24.600976Z","end":"2024-08-02T17:29:24.762546Z","steps":["trace[435111341] 'agreement among raft nodes before linearized reading'  (duration: 161.483791ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-02T17:29:43.453249Z","caller":"traceutil/trace.go:171","msg":"trace[1183069591] transaction","detail":"{read_only:false; response_revision:1176; number_of_response:1; }","duration":"238.62669ms","start":"2024-08-02T17:29:43.214587Z","end":"2024-08-02T17:29:43.453214Z","steps":["trace[1183069591] 'process raft request'  (duration: 238.466052ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-02T17:29:44.577688Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"214.175274ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/\" range_end:\"/registry/minions0\" limit:500 ","response":"range_response_count:1 size:5002"}
	{"level":"info","ts":"2024-08-02T17:29:44.57793Z","caller":"traceutil/trace.go:171","msg":"trace[2137799045] range","detail":"{range_begin:/registry/minions/; range_end:/registry/minions0; response_count:1; response_revision:1178; }","duration":"214.912405ms","start":"2024-08-02T17:29:44.362996Z","end":"2024-08-02T17:29:44.577909Z","steps":["trace[2137799045] 'range keys from in-memory index tree'  (duration: 214.066228ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-02T17:29:58.154539Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"109.17874ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/\" range_end:\"/registry/pods/kube-system0\" ","response":"range_response_count:18 size:86246"}
	{"level":"info","ts":"2024-08-02T17:29:58.155154Z","caller":"traceutil/trace.go:171","msg":"trace[521451317] range","detail":"{range_begin:/registry/pods/kube-system/; range_end:/registry/pods/kube-system0; response_count:18; response_revision:1259; }","duration":"109.493897ms","start":"2024-08-02T17:29:58.045313Z","end":"2024-08-02T17:29:58.154807Z","steps":["trace[521451317] 'range keys from in-memory index tree'  (duration: 108.898992ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-02T17:30:13.461455Z","caller":"traceutil/trace.go:171","msg":"trace[1876489739] linearizableReadLoop","detail":"{readStateIndex:1392; appliedIndex:1391; }","duration":"249.755944ms","start":"2024-08-02T17:30:13.211672Z","end":"2024-08-02T17:30:13.461428Z","steps":["trace[1876489739] 'read index received'  (duration: 249.573191ms)","trace[1876489739] 'applied index is now lower than readState.Index'  (duration: 182.304µs)"],"step_count":2}
	{"level":"warn","ts":"2024-08-02T17:30:13.461744Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"249.966216ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/gcp-auth/\" range_end:\"/registry/pods/gcp-auth0\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-02T17:30:13.461815Z","caller":"traceutil/trace.go:171","msg":"trace[417419214] range","detail":"{range_begin:/registry/pods/gcp-auth/; range_end:/registry/pods/gcp-auth0; response_count:0; response_revision:1339; }","duration":"250.172583ms","start":"2024-08-02T17:30:13.211628Z","end":"2024-08-02T17:30:13.4618Z","steps":["trace[417419214] 'agreement among raft nodes before linearized reading'  (duration: 249.970628ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-02T17:30:13.462076Z","caller":"traceutil/trace.go:171","msg":"trace[1801370662] transaction","detail":"{read_only:false; response_revision:1339; number_of_response:1; }","duration":"362.222193ms","start":"2024-08-02T17:30:13.09984Z","end":"2024-08-02T17:30:13.462062Z","steps":["trace[1801370662] 'process raft request'  (duration: 361.462773ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-02T17:30:13.463932Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-02T17:30:13.099827Z","time spent":"363.013689ms","remote":"127.0.0.1:58444","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1098,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:1336 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1025 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"info","ts":"2024-08-02T17:30:17.29444Z","caller":"traceutil/trace.go:171","msg":"trace[1747400129] linearizableReadLoop","detail":"{readStateIndex:1397; appliedIndex:1396; }","duration":"200.181788ms","start":"2024-08-02T17:30:17.094244Z","end":"2024-08-02T17:30:17.294426Z","steps":["trace[1747400129] 'read index received'  (duration: 200.045523ms)","trace[1747400129] 'applied index is now lower than readState.Index'  (duration: 135.868µs)"],"step_count":2}
	{"level":"info","ts":"2024-08-02T17:30:17.294933Z","caller":"traceutil/trace.go:171","msg":"trace[339032286] transaction","detail":"{read_only:false; response_revision:1343; number_of_response:1; }","duration":"379.962323ms","start":"2024-08-02T17:30:16.914957Z","end":"2024-08-02T17:30:17.294919Z","steps":["trace[339032286] 'process raft request'  (duration: 379.378689ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-02T17:30:17.295124Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-02T17:30:16.914943Z","time spent":"380.089916ms","remote":"127.0.0.1:58534","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":484,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/leases/kube-system/snapshot-controller-leader\" mod_revision:1337 > success:<request_put:<key:\"/registry/leases/kube-system/snapshot-controller-leader\" value_size:421 >> failure:<request_range:<key:\"/registry/leases/kube-system/snapshot-controller-leader\" > >"}
	{"level":"warn","ts":"2024-08-02T17:30:17.295578Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"201.312296ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/ingress-nginx/\" range_end:\"/registry/pods/ingress-nginx0\" ","response":"range_response_count:3 size:14521"}
	{"level":"info","ts":"2024-08-02T17:30:17.296184Z","caller":"traceutil/trace.go:171","msg":"trace[347869621] range","detail":"{range_begin:/registry/pods/ingress-nginx/; range_end:/registry/pods/ingress-nginx0; response_count:3; response_revision:1343; }","duration":"201.960666ms","start":"2024-08-02T17:30:17.09421Z","end":"2024-08-02T17:30:17.29617Z","steps":["trace[347869621] 'agreement among raft nodes before linearized reading'  (duration: 201.271504ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-02T17:31:42.732221Z","caller":"traceutil/trace.go:171","msg":"trace[359799218] linearizableReadLoop","detail":"{readStateIndex:1700; appliedIndex:1699; }","duration":"240.762329ms","start":"2024-08-02T17:31:42.491415Z","end":"2024-08-02T17:31:42.732177Z","steps":["trace[359799218] 'read index received'  (duration: 240.622052ms)","trace[359799218] 'applied index is now lower than readState.Index'  (duration: 139.784µs)"],"step_count":2}
	{"level":"info","ts":"2024-08-02T17:31:42.732398Z","caller":"traceutil/trace.go:171","msg":"trace[932431741] transaction","detail":"{read_only:false; response_revision:1627; number_of_response:1; }","duration":"244.996358ms","start":"2024-08-02T17:31:42.487393Z","end":"2024-08-02T17:31:42.732389Z","steps":["trace[932431741] 'process raft request'  (duration: 244.693637ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-02T17:31:42.7326Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"237.082636ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/leases/kube-system/snapshot-controller-leader\" ","response":"range_response_count:1 size:499"}
	{"level":"warn","ts":"2024-08-02T17:31:42.732641Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"241.217678ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io\" ","response":"range_response_count:1 size:553"}
	{"level":"info","ts":"2024-08-02T17:31:42.732654Z","caller":"traceutil/trace.go:171","msg":"trace[359099506] range","detail":"{range_begin:/registry/leases/kube-system/snapshot-controller-leader; range_end:; response_count:1; response_revision:1627; }","duration":"237.194244ms","start":"2024-08-02T17:31:42.495445Z","end":"2024-08-02T17:31:42.732639Z","steps":["trace[359099506] 'agreement among raft nodes before linearized reading'  (duration: 237.014969ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-02T17:31:42.732662Z","caller":"traceutil/trace.go:171","msg":"trace[517501709] range","detail":"{range_begin:/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io; range_end:; response_count:1; response_revision:1627; }","duration":"241.267294ms","start":"2024-08-02T17:31:42.491389Z","end":"2024-08-02T17:31:42.732656Z","steps":["trace[517501709] 'agreement among raft nodes before linearized reading'  (duration: 241.197349ms)"],"step_count":1}
	
	
	==> kernel <==
	 17:33:32 up 6 min,  0 users,  load average: 0.83, 0.97, 0.56
	Linux addons-723198 5.10.207 #1 SMP Wed Jul 31 15:10:11 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [9d52a5ae4e94] <==
	I0802 17:31:35.111425       1 controller.go:615] quota admission added evaluator for: jobs.batch.volcano.sh
	I0802 17:31:35.137922       1 controller.go:615] quota admission added evaluator for: podgroups.scheduling.volcano.sh
	I0802 17:31:51.752666       1 handler.go:286] Adding GroupVersion batch.volcano.sh v1alpha1 to ResourceManager
	I0802 17:31:51.815639       1 handler.go:286] Adding GroupVersion bus.volcano.sh v1alpha1 to ResourceManager
	I0802 17:31:52.436697       1 handler.go:286] Adding GroupVersion scheduling.volcano.sh v1beta1 to ResourceManager
	I0802 17:31:52.559412       1 handler.go:286] Adding GroupVersion scheduling.volcano.sh v1beta1 to ResourceManager
	I0802 17:31:52.679138       1 handler.go:286] Adding GroupVersion nodeinfo.volcano.sh v1alpha1 to ResourceManager
	I0802 17:31:52.679835       1 handler.go:286] Adding GroupVersion scheduling.volcano.sh v1beta1 to ResourceManager
	I0802 17:31:52.943626       1 handler.go:286] Adding GroupVersion flow.volcano.sh v1alpha1 to ResourceManager
	I0802 17:31:52.958217       1 handler.go:286] Adding GroupVersion flow.volcano.sh v1alpha1 to ResourceManager
	W0802 17:31:53.072013       1 cacher.go:168] Terminating all watchers from cacher commands.bus.volcano.sh
	I0802 17:31:53.092642       1 handler.go:286] Adding GroupVersion flow.volcano.sh v1alpha1 to ResourceManager
	W0802 17:31:53.677042       1 cacher.go:168] Terminating all watchers from cacher jobs.batch.volcano.sh
	W0802 17:31:53.680592       1 cacher.go:168] Terminating all watchers from cacher podgroups.scheduling.volcano.sh
	W0802 17:31:53.735345       1 cacher.go:168] Terminating all watchers from cacher numatopologies.nodeinfo.volcano.sh
	W0802 17:31:53.766653       1 cacher.go:168] Terminating all watchers from cacher queues.scheduling.volcano.sh
	W0802 17:31:54.093331       1 cacher.go:168] Terminating all watchers from cacher jobflows.flow.volcano.sh
	W0802 17:31:54.435425       1 cacher.go:168] Terminating all watchers from cacher jobtemplates.flow.volcano.sh
	E0802 17:32:10.704292       1 conn.go:339] Error on socket receive: read tcp 192.168.39.195:8443->192.168.39.1:43070: use of closed network connection
	E0802 17:32:10.907485       1 conn.go:339] Error on socket receive: read tcp 192.168.39.195:8443->192.168.39.1:43092: use of closed network connection
	I0802 17:32:26.184944       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0802 17:32:27.232907       1 cacher.go:168] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I0802 17:32:31.816004       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I0802 17:32:32.024935       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.97.85.8"}
	I0802 17:32:40.880117       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	
	
	==> kube-controller-manager [3e94af1c2883] <==
	W0802 17:32:36.475339       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0802 17:32:36.475569       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0802 17:32:43.385956       1 shared_informer.go:313] Waiting for caches to sync for resource quota
	I0802 17:32:43.386074       1 shared_informer.go:320] Caches are synced for resource quota
	I0802 17:32:43.636881       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I0802 17:32:43.636932       1 shared_informer.go:320] Caches are synced for garbage collector
	W0802 17:32:45.582186       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0802 17:32:45.582333       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0802 17:32:56.298417       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0802 17:32:56.298450       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0802 17:32:57.988357       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0802 17:32:57.988406       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0802 17:33:01.314158       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0802 17:33:01.314260       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0802 17:33:07.037583       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0802 17:33:07.037836       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0802 17:33:07.996438       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0802 17:33:07.996486       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0802 17:33:10.253969       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0802 17:33:10.254024       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0802 17:33:14.977292       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0802 17:33:14.977627       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0802 17:33:15.163301       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0802 17:33:15.163534       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0802 17:33:31.345687       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/registry-698f998955" duration="8.651µs"
	
	
	==> kube-proxy [a9d8d91be079] <==
	I0802 17:27:45.073007       1 server_linux.go:69] "Using iptables proxy"
	I0802 17:27:45.093178       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.195"]
	I0802 17:27:45.206006       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0802 17:27:45.206045       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0802 17:27:45.206060       1 server_linux.go:165] "Using iptables Proxier"
	I0802 17:27:45.210220       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0802 17:27:45.210399       1 server.go:872] "Version info" version="v1.30.3"
	I0802 17:27:45.210409       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0802 17:27:45.211821       1 config.go:192] "Starting service config controller"
	I0802 17:27:45.211834       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0802 17:27:45.214447       1 config.go:101] "Starting endpoint slice config controller"
	I0802 17:27:45.214462       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0802 17:27:45.223341       1 config.go:319] "Starting node config controller"
	I0802 17:27:45.223354       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0802 17:27:45.315167       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0802 17:27:45.315213       1 shared_informer.go:320] Caches are synced for service config
	I0802 17:27:45.324998       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [254be607cf1d] <==
	W0802 17:27:27.619209       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0802 17:27:27.619308       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0802 17:27:28.424719       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0802 17:27:28.424751       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0802 17:27:28.502074       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0802 17:27:28.502276       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0802 17:27:28.571439       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0802 17:27:28.571482       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0802 17:27:28.619477       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0802 17:27:28.619717       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0802 17:27:28.697319       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0802 17:27:28.697549       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0802 17:27:28.749470       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0802 17:27:28.749514       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0802 17:27:28.749914       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0802 17:27:28.749946       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0802 17:27:28.770463       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0802 17:27:28.770631       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0802 17:27:28.800120       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0802 17:27:28.800309       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0802 17:27:28.892145       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0802 17:27:28.892649       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0802 17:27:28.916822       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0802 17:27:28.917095       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0802 17:27:30.611330       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Aug 02 17:32:32 addons-723198 kubelet[2022]: I0802 17:32:32.508286    2022 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ae3acd65ee1d072d0cf4d14fe091aa88ef5a89ad9390cb6f6f7e07ec46f4c579"
	Aug 02 17:32:38 addons-723198 kubelet[2022]: I0802 17:32:38.005574    2022 topology_manager.go:215] "Topology Admit Handler" podUID="622a71ce-4975-425a-8caf-76e266f8394d" podNamespace="default" podName="task-pv-pod"
	Aug 02 17:32:38 addons-723198 kubelet[2022]: I0802 17:32:38.085345    2022 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lfjbj\" (UniqueName: \"kubernetes.io/projected/622a71ce-4975-425a-8caf-76e266f8394d-kube-api-access-lfjbj\") pod \"task-pv-pod\" (UID: \"622a71ce-4975-425a-8caf-76e266f8394d\") " pod="default/task-pv-pod"
	Aug 02 17:32:38 addons-723198 kubelet[2022]: I0802 17:32:38.085555    2022 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-50b9d8e5-380a-4280-b083-0a46fd37d5d4\" (UniqueName: \"kubernetes.io/csi/hostpath.csi.k8s.io^3615fd61-50f5-11ef-a085-7a68dbf85ddd\") pod \"task-pv-pod\" (UID: \"622a71ce-4975-425a-8caf-76e266f8394d\") " pod="default/task-pv-pod"
	Aug 02 17:32:38 addons-723198 kubelet[2022]: I0802 17:32:38.203607    2022 operation_generator.go:664] "MountVolume.MountDevice succeeded for volume \"pvc-50b9d8e5-380a-4280-b083-0a46fd37d5d4\" (UniqueName: \"kubernetes.io/csi/hostpath.csi.k8s.io^3615fd61-50f5-11ef-a085-7a68dbf85ddd\") pod \"task-pv-pod\" (UID: \"622a71ce-4975-425a-8caf-76e266f8394d\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/hostpath.csi.k8s.io/6e372eb712a655d8ca80c1d5132515106fe498fcccf95d1d8951a457936257b6/globalmount\"" pod="default/task-pv-pod"
	Aug 02 17:33:03 addons-723198 kubelet[2022]: I0802 17:33:03.209769    2022 kubelet_pods.go:988] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-lnmnh" secret="" err="secret \"gcp-auth\" not found"
	Aug 02 17:33:14 addons-723198 kubelet[2022]: I0802 17:33:14.210197    2022 kubelet_pods.go:988] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/nvidia-device-plugin-daemonset-g2nk5" secret="" err="secret \"gcp-auth\" not found"
	Aug 02 17:33:25 addons-723198 kubelet[2022]: I0802 17:33:25.210237    2022 kubelet_pods.go:988] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
	Aug 02 17:33:28 addons-723198 kubelet[2022]: I0802 17:33:28.210288    2022 kubelet_pods.go:988] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/cloud-spanner-emulator-5455fb9b69-26pff" secret="" err="secret \"gcp-auth\" not found"
	Aug 02 17:33:30 addons-723198 kubelet[2022]: E0802 17:33:30.246215    2022 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 02 17:33:30 addons-723198 kubelet[2022]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 02 17:33:30 addons-723198 kubelet[2022]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 02 17:33:30 addons-723198 kubelet[2022]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 02 17:33:30 addons-723198 kubelet[2022]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 02 17:33:31 addons-723198 kubelet[2022]: I0802 17:33:31.828644    2022 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vd7dc\" (UniqueName: \"kubernetes.io/projected/4d303fd6-afaa-4c57-8d55-3ec1c66f6415-kube-api-access-vd7dc\") pod \"4d303fd6-afaa-4c57-8d55-3ec1c66f6415\" (UID: \"4d303fd6-afaa-4c57-8d55-3ec1c66f6415\") "
	Aug 02 17:33:31 addons-723198 kubelet[2022]: I0802 17:33:31.832424    2022 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4d303fd6-afaa-4c57-8d55-3ec1c66f6415-kube-api-access-vd7dc" (OuterVolumeSpecName: "kube-api-access-vd7dc") pod "4d303fd6-afaa-4c57-8d55-3ec1c66f6415" (UID: "4d303fd6-afaa-4c57-8d55-3ec1c66f6415"). InnerVolumeSpecName "kube-api-access-vd7dc". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Aug 02 17:33:31 addons-723198 kubelet[2022]: I0802 17:33:31.929596    2022 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-vd7dc\" (UniqueName: \"kubernetes.io/projected/4d303fd6-afaa-4c57-8d55-3ec1c66f6415-kube-api-access-vd7dc\") on node \"addons-723198\" DevicePath \"\""
	Aug 02 17:33:32 addons-723198 kubelet[2022]: I0802 17:33:32.031099    2022 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6fc6t\" (UniqueName: \"kubernetes.io/projected/1f0dd687-67d1-4e50-89e4-61b430552e7b-kube-api-access-6fc6t\") pod \"1f0dd687-67d1-4e50-89e4-61b430552e7b\" (UID: \"1f0dd687-67d1-4e50-89e4-61b430552e7b\") "
	Aug 02 17:33:32 addons-723198 kubelet[2022]: I0802 17:33:32.033949    2022 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1f0dd687-67d1-4e50-89e4-61b430552e7b-kube-api-access-6fc6t" (OuterVolumeSpecName: "kube-api-access-6fc6t") pod "1f0dd687-67d1-4e50-89e4-61b430552e7b" (UID: "1f0dd687-67d1-4e50-89e4-61b430552e7b"). InnerVolumeSpecName "kube-api-access-6fc6t". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Aug 02 17:33:32 addons-723198 kubelet[2022]: I0802 17:33:32.132441    2022 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-6fc6t\" (UniqueName: \"kubernetes.io/projected/1f0dd687-67d1-4e50-89e4-61b430552e7b-kube-api-access-6fc6t\") on node \"addons-723198\" DevicePath \"\""
	Aug 02 17:33:32 addons-723198 kubelet[2022]: I0802 17:33:32.736093    2022 scope.go:117] "RemoveContainer" containerID="2f0ac0d6ba5b8bc7cc5dffc0ac6f67073f00947d3fe3fe81e140858b24225a41"
	Aug 02 17:33:32 addons-723198 kubelet[2022]: I0802 17:33:32.802198    2022 scope.go:117] "RemoveContainer" containerID="2f0ac0d6ba5b8bc7cc5dffc0ac6f67073f00947d3fe3fe81e140858b24225a41"
	Aug 02 17:33:32 addons-723198 kubelet[2022]: E0802 17:33:32.827163    2022 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = Unknown desc = Error response from daemon: No such container: 2f0ac0d6ba5b8bc7cc5dffc0ac6f67073f00947d3fe3fe81e140858b24225a41" containerID="2f0ac0d6ba5b8bc7cc5dffc0ac6f67073f00947d3fe3fe81e140858b24225a41"
	Aug 02 17:33:32 addons-723198 kubelet[2022]: I0802 17:33:32.827743    2022 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"docker","ID":"2f0ac0d6ba5b8bc7cc5dffc0ac6f67073f00947d3fe3fe81e140858b24225a41"} err="failed to get container status \"2f0ac0d6ba5b8bc7cc5dffc0ac6f67073f00947d3fe3fe81e140858b24225a41\": rpc error: code = Unknown desc = Error response from daemon: No such container: 2f0ac0d6ba5b8bc7cc5dffc0ac6f67073f00947d3fe3fe81e140858b24225a41"
	Aug 02 17:33:32 addons-723198 kubelet[2022]: I0802 17:33:32.827881    2022 scope.go:117] "RemoveContainer" containerID="cf145e49f3857ec420f05e4ee803cdeacb7ab1ae2fa1ee49e12d2e0c76776c1f"
	
	
	==> storage-provisioner [9529af40bf73] <==
	I0802 17:27:53.662122       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0802 17:27:53.677434       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0802 17:27:53.678536       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0802 17:27:53.759371       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0802 17:27:53.759506       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-723198_506f405d-7e02-404d-a5c6-785638f23fa6!
	I0802 17:27:53.760456       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"dd85db3b-37f1-4ab5-b263-38cf0527abe3", APIVersion:"v1", ResourceVersion:"702", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-723198_506f405d-7e02-404d-a5c6-785638f23fa6 became leader
	I0802 17:27:53.859640       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-723198_506f405d-7e02-404d-a5c6-785638f23fa6!

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-723198 -n addons-723198
helpers_test.go:261: (dbg) Run:  kubectl --context addons-723198 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: nginx registry-test task-pv-pod ingress-nginx-admission-create-d477w ingress-nginx-admission-patch-ts22b helm-test
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/Registry]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-723198 describe pod nginx registry-test task-pv-pod ingress-nginx-admission-create-d477w ingress-nginx-admission-patch-ts22b helm-test
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-723198 describe pod nginx registry-test task-pv-pod ingress-nginx-admission-create-d477w ingress-nginx-admission-patch-ts22b helm-test: exit status 1 (94.210166ms)

-- stdout --
	Name:             nginx
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-723198/192.168.39.195
	Start Time:       Fri, 02 Aug 2024 17:32:31 +0000
	Labels:           run=nginx
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Containers:
	  nginx:
	    Container ID:   
	    Image:          docker.io/nginx:alpine
	    Image ID:       
	    Port:           80/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ContainerCreating
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-dg224 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-dg224:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  62s   default-scheduler  Successfully assigned default/nginx to addons-723198
	  Normal  Pulling    61s   kubelet            Pulling image "docker.io/nginx:alpine"
	
	
	Name:                      registry-test
	Namespace:                 default
	Priority:                  0
	Service Account:           default
	Node:                      addons-723198/192.168.39.195
	Start Time:                Fri, 02 Aug 2024 17:32:30 +0000
	Labels:                    run=registry-test
	Annotations:               <none>
	Status:                    Terminating (lasts <invalid>)
	Termination Grace Period:  30s
	IP:                        
	IPs:                       <none>
	Containers:
	  registry-test:
	    Container ID:  
	    Image:         gcr.io/k8s-minikube/busybox
	    Image ID:      
	    Port:          <none>
	    Host Port:     <none>
	    Args:
	      sh
	      -c
	      wget --spider -S http://registry.kube-system.svc.cluster.local
	    State:          Waiting
	      Reason:       ContainerCreating
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-sn8bh (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-sn8bh:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  63s   default-scheduler  Successfully assigned default/registry-test to addons-723198
	  Normal  Pulling    62s   kubelet            Pulling image "gcr.io/k8s-minikube/busybox"
	
	
	Name:             task-pv-pod
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-723198/192.168.39.195
	Start Time:       Fri, 02 Aug 2024 17:32:38 +0000
	Labels:           app=task-pv-pod
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Containers:
	  task-pv-container:
	    Container ID:   
	    Image:          docker.io/nginx
	    Image ID:       
	    Port:           80/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ContainerCreating
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /usr/share/nginx/html from task-pv-storage (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-lfjbj (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  task-pv-storage:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  hpvc
	    ReadOnly:   false
	  kube-api-access-lfjbj:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  55s   default-scheduler  Successfully assigned default/task-pv-pod to addons-723198
	  Normal  Pulling    55s   kubelet            Pulling image "docker.io/nginx"

-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-d477w" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-ts22b" not found
	Error from server (NotFound): pods "helm-test" not found

** /stderr **
helpers_test.go:279: kubectl --context addons-723198 describe pod nginx registry-test task-pv-pod ingress-nginx-admission-create-d477w ingress-nginx-admission-patch-ts22b helm-test: exit status 1
--- FAIL: TestAddons/parallel/Registry (73.86s)

                                                
                                    

Test pass (314/349)

Order passed test Duration
3 TestDownloadOnly/v1.20.0/json-events 9.79
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.06
9 TestDownloadOnly/v1.20.0/DeleteAll 0.14
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.13
12 TestDownloadOnly/v1.30.3/json-events 3.81
13 TestDownloadOnly/v1.30.3/preload-exists 0
17 TestDownloadOnly/v1.30.3/LogsDuration 0.06
18 TestDownloadOnly/v1.30.3/DeleteAll 0.13
19 TestDownloadOnly/v1.30.3/DeleteAlwaysSucceeds 0.12
21 TestDownloadOnly/v1.31.0-rc.0/json-events 14.67
22 TestDownloadOnly/v1.31.0-rc.0/preload-exists 0
26 TestDownloadOnly/v1.31.0-rc.0/LogsDuration 0.06
27 TestDownloadOnly/v1.31.0-rc.0/DeleteAll 0.13
28 TestDownloadOnly/v1.31.0-rc.0/DeleteAlwaysSucceeds 0.13
30 TestBinaryMirror 0.55
31 TestOffline 132.13
34 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.05
35 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.05
36 TestAddons/Setup 276.56
38 TestAddons/serial/Volcano 42.49
40 TestAddons/serial/GCPAuth/Namespaces 0.12
43 TestAddons/parallel/Ingress 92.77
44 TestAddons/parallel/InspektorGadget 11.82
45 TestAddons/parallel/MetricsServer 6.82
46 TestAddons/parallel/HelmTiller 83.47
48 TestAddons/parallel/CSI 134.45
49 TestAddons/parallel/Headlamp 19.44
50 TestAddons/parallel/CloudSpanner 5.53
51 TestAddons/parallel/LocalPath 69.08
52 TestAddons/parallel/NvidiaDevicePlugin 5.4
53 TestAddons/parallel/Yakd 11.62
54 TestAddons/StoppedEnableDisable 13.55
55 TestCertOptions 105.4
56 TestCertExpiration 331.52
57 TestDockerFlags 54.57
58 TestForceSystemdFlag 84.89
59 TestForceSystemdEnv 74.6
61 TestKVMDriverInstallOrUpdate 3.97
65 TestErrorSpam/setup 49.09
66 TestErrorSpam/start 0.33
67 TestErrorSpam/status 0.72
68 TestErrorSpam/pause 1.19
69 TestErrorSpam/unpause 1.22
70 TestErrorSpam/stop 15.46
73 TestFunctional/serial/CopySyncFile 0
74 TestFunctional/serial/StartWithProxy 103.93
75 TestFunctional/serial/AuditLog 0
76 TestFunctional/serial/SoftStart 37.98
77 TestFunctional/serial/KubeContext 0.04
78 TestFunctional/serial/KubectlGetPods 0.07
81 TestFunctional/serial/CacheCmd/cache/add_remote 2.22
82 TestFunctional/serial/CacheCmd/cache/add_local 1.23
83 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.04
84 TestFunctional/serial/CacheCmd/cache/list 0.04
85 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.21
86 TestFunctional/serial/CacheCmd/cache/cache_reload 1.13
87 TestFunctional/serial/CacheCmd/cache/delete 0.09
88 TestFunctional/serial/MinikubeKubectlCmd 0.1
89 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.09
90 TestFunctional/serial/ExtraConfig 42.44
91 TestFunctional/serial/ComponentHealth 0.07
92 TestFunctional/serial/LogsCmd 1.01
93 TestFunctional/serial/LogsFileCmd 1.03
94 TestFunctional/serial/InvalidService 4.58
96 TestFunctional/parallel/ConfigCmd 0.31
97 TestFunctional/parallel/DashboardCmd 26.2
98 TestFunctional/parallel/DryRun 0.27
99 TestFunctional/parallel/InternationalLanguage 0.14
100 TestFunctional/parallel/StatusCmd 1.02
104 TestFunctional/parallel/ServiceCmdConnect 7.5
105 TestFunctional/parallel/AddonsCmd 0.12
106 TestFunctional/parallel/PersistentVolumeClaim 46.28
108 TestFunctional/parallel/SSHCmd 0.37
109 TestFunctional/parallel/CpCmd 1.47
110 TestFunctional/parallel/MySQL 37.65
111 TestFunctional/parallel/FileSync 0.2
112 TestFunctional/parallel/CertSync 1.27
116 TestFunctional/parallel/NodeLabels 0.06
118 TestFunctional/parallel/NonActiveRuntimeDisabled 0.2
120 TestFunctional/parallel/License 0.21
121 TestFunctional/parallel/ServiceCmd/DeployApp 11.25
122 TestFunctional/parallel/ProfileCmd/profile_not_create 0.35
123 TestFunctional/parallel/ProfileCmd/profile_list 0.31
124 TestFunctional/parallel/ProfileCmd/profile_json_output 0.32
125 TestFunctional/parallel/MountCmd/any-port 8.7
126 TestFunctional/parallel/Version/short 0.05
127 TestFunctional/parallel/Version/components 0.56
128 TestFunctional/parallel/ImageCommands/ImageListShort 0.21
129 TestFunctional/parallel/ImageCommands/ImageListTable 0.22
130 TestFunctional/parallel/ImageCommands/ImageListJson 0.19
131 TestFunctional/parallel/ImageCommands/ImageListYaml 0.21
132 TestFunctional/parallel/ImageCommands/ImageBuild 3.96
133 TestFunctional/parallel/ImageCommands/Setup 1.58
134 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.09
135 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.81
136 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.44
137 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.29
138 TestFunctional/parallel/ImageCommands/ImageRemove 0.37
139 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.64
140 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.34
150 TestFunctional/parallel/MountCmd/specific-port 1.99
151 TestFunctional/parallel/ServiceCmd/List 0.44
152 TestFunctional/parallel/ServiceCmd/JSONOutput 0.53
153 TestFunctional/parallel/MountCmd/VerifyCleanup 1.33
154 TestFunctional/parallel/ServiceCmd/HTTPS 0.34
155 TestFunctional/parallel/ServiceCmd/Format 0.33
156 TestFunctional/parallel/ServiceCmd/URL 0.3
157 TestFunctional/parallel/DockerEnv/bash 0.74
158 TestFunctional/parallel/UpdateContextCmd/no_changes 0.1
159 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.09
160 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.1
161 TestFunctional/delete_echo-server_images 0.04
162 TestFunctional/delete_my-image_image 0.01
163 TestFunctional/delete_minikube_cached_images 0.01
164 TestGvisorAddon 205.73
167 TestMultiControlPlane/serial/StartCluster 218.72
168 TestMultiControlPlane/serial/DeployApp 5.16
169 TestMultiControlPlane/serial/PingHostFromPods 1.24
170 TestMultiControlPlane/serial/AddWorkerNode 62.93
171 TestMultiControlPlane/serial/NodeLabels 0.08
172 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.55
173 TestMultiControlPlane/serial/CopyFile 12.45
174 TestMultiControlPlane/serial/StopSecondaryNode 13.22
175 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.37
176 TestMultiControlPlane/serial/RestartSecondaryNode 47.38
177 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.52
178 TestMultiControlPlane/serial/RestartClusterKeepsNodes 291.83
179 TestMultiControlPlane/serial/DeleteSecondaryNode 4.8
180 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.36
181 TestMultiControlPlane/serial/StopCluster 28.3
182 TestMultiControlPlane/serial/RestartCluster 144.44
183 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.39
184 TestMultiControlPlane/serial/AddSecondaryNode 82.4
185 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.56
188 TestImageBuild/serial/Setup 49.64
189 TestImageBuild/serial/NormalBuild 1.98
190 TestImageBuild/serial/BuildWithBuildArg 1.05
191 TestImageBuild/serial/BuildWithDockerIgnore 0.82
192 TestImageBuild/serial/BuildWithSpecifiedDockerfile 0.79
196 TestJSONOutput/start/Command 66.72
197 TestJSONOutput/start/Audit 0
199 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
200 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
202 TestJSONOutput/pause/Command 0.54
203 TestJSONOutput/pause/Audit 0
205 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
206 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
208 TestJSONOutput/unpause/Command 0.5
209 TestJSONOutput/unpause/Audit 0
211 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
212 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
214 TestJSONOutput/stop/Command 7.56
215 TestJSONOutput/stop/Audit 0
217 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
218 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
219 TestErrorJSONOutput 0.19
224 TestMainNoArgs 0.04
225 TestMinikubeProfile 100.73
228 TestMountStart/serial/StartWithMountFirst 30.98
229 TestMountStart/serial/VerifyMountFirst 0.36
230 TestMountStart/serial/StartWithMountSecond 27.85
231 TestMountStart/serial/VerifyMountSecond 0.37
232 TestMountStart/serial/DeleteFirst 0.69
233 TestMountStart/serial/VerifyMountPostDelete 0.37
234 TestMountStart/serial/Stop 2.27
235 TestMountStart/serial/RestartStopped 24.44
236 TestMountStart/serial/VerifyMountPostStop 0.39
239 TestMultiNode/serial/FreshStart2Nodes 146.37
240 TestMultiNode/serial/DeployApp2Nodes 5.76
241 TestMultiNode/serial/PingHostFrom2Pods 0.8
242 TestMultiNode/serial/AddNode 54.73
243 TestMultiNode/serial/MultiNodeLabels 0.06
244 TestMultiNode/serial/ProfileList 0.21
245 TestMultiNode/serial/CopyFile 7.21
246 TestMultiNode/serial/StopNode 3.47
247 TestMultiNode/serial/StartAfterStop 42.4
248 TestMultiNode/serial/RestartKeepsNodes 188.81
249 TestMultiNode/serial/DeleteNode 2.3
250 TestMultiNode/serial/StopMultiNode 25.76
251 TestMultiNode/serial/RestartMultiNode 123.27
252 TestMultiNode/serial/ValidateNameConflict 48.96
257 TestPreload 155.25
259 TestScheduledStopUnix 120.78
260 TestSkaffold 127.79
263 TestRunningBinaryUpgrade 172.93
265 TestKubernetesUpgrade 210.1
278 TestStoppedBinaryUpgrade/Setup 0.47
279 TestStoppedBinaryUpgrade/Upgrade 198.52
280 TestStoppedBinaryUpgrade/MinikubeLogs 1.04
282 TestPause/serial/Start 89.33
291 TestNoKubernetes/serial/StartNoK8sWithVersion 0.08
292 TestNoKubernetes/serial/StartWithK8s 97.06
293 TestNetworkPlugins/group/auto/Start 73.26
294 TestPause/serial/SecondStartNoReconfiguration 72.8
295 TestNoKubernetes/serial/StartWithStopK8s 9.91
296 TestNoKubernetes/serial/Start 35.06
297 TestPause/serial/Pause 0.64
298 TestNetworkPlugins/group/auto/KubeletFlags 0.21
299 TestNetworkPlugins/group/auto/NetCatPod 10.25
300 TestPause/serial/VerifyStatus 0.25
301 TestPause/serial/Unpause 0.61
302 TestPause/serial/PauseAgain 0.95
303 TestPause/serial/DeletePaused 1.12
304 TestPause/serial/VerifyDeletedResources 0.53
305 TestNetworkPlugins/group/kindnet/Start 87.62
306 TestNoKubernetes/serial/VerifyK8sNotRunning 0.21
307 TestNoKubernetes/serial/ProfileList 1.13
308 TestNoKubernetes/serial/Stop 2.29
309 TestNoKubernetes/serial/StartNoArgs 50.68
310 TestNetworkPlugins/group/auto/DNS 0.16
311 TestNetworkPlugins/group/auto/Localhost 0.13
312 TestNetworkPlugins/group/auto/HairPin 0.13
313 TestNetworkPlugins/group/calico/Start 127.15
314 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.19
315 TestNetworkPlugins/group/custom-flannel/Start 106.85
316 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
317 TestNetworkPlugins/group/kindnet/KubeletFlags 0.2
318 TestNetworkPlugins/group/kindnet/NetCatPod 11.23
319 TestNetworkPlugins/group/kindnet/DNS 0.16
320 TestNetworkPlugins/group/kindnet/Localhost 0.14
321 TestNetworkPlugins/group/kindnet/HairPin 0.13
322 TestNetworkPlugins/group/false/Start 75.52
323 TestNetworkPlugins/group/enable-default-cni/Start 117.64
324 TestNetworkPlugins/group/calico/ControllerPod 6.01
325 TestNetworkPlugins/group/calico/KubeletFlags 0.28
326 TestNetworkPlugins/group/calico/NetCatPod 14.33
327 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.24
328 TestNetworkPlugins/group/custom-flannel/NetCatPod 11.26
329 TestNetworkPlugins/group/calico/DNS 0.18
330 TestNetworkPlugins/group/calico/Localhost 0.15
331 TestNetworkPlugins/group/calico/HairPin 0.16
332 TestNetworkPlugins/group/custom-flannel/DNS 0.2
333 TestNetworkPlugins/group/custom-flannel/Localhost 0.17
334 TestNetworkPlugins/group/custom-flannel/HairPin 0.17
335 TestNetworkPlugins/group/flannel/Start 82.89
336 TestNetworkPlugins/group/bridge/Start 97.71
337 TestNetworkPlugins/group/false/KubeletFlags 0.23
338 TestNetworkPlugins/group/false/NetCatPod 11.3
339 TestNetworkPlugins/group/false/DNS 0.15
340 TestNetworkPlugins/group/false/Localhost 0.13
341 TestNetworkPlugins/group/false/HairPin 0.15
342 TestNetworkPlugins/group/kubenet/Start 88.35
343 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.27
344 TestNetworkPlugins/group/enable-default-cni/NetCatPod 12.29
345 TestNetworkPlugins/group/flannel/ControllerPod 6.01
346 TestNetworkPlugins/group/enable-default-cni/DNS 0.18
347 TestNetworkPlugins/group/enable-default-cni/Localhost 0.16
348 TestNetworkPlugins/group/enable-default-cni/HairPin 0.15
349 TestNetworkPlugins/group/flannel/KubeletFlags 0.22
350 TestNetworkPlugins/group/flannel/NetCatPod 13.23
352 TestStartStop/group/old-k8s-version/serial/FirstStart 153.65
353 TestNetworkPlugins/group/flannel/DNS 0.22
354 TestNetworkPlugins/group/flannel/Localhost 0.17
355 TestNetworkPlugins/group/flannel/HairPin 0.17
356 TestNetworkPlugins/group/bridge/KubeletFlags 0.22
357 TestNetworkPlugins/group/bridge/NetCatPod 13.25
359 TestStartStop/group/no-preload/serial/FirstStart 96.36
360 TestNetworkPlugins/group/bridge/DNS 0.19
361 TestNetworkPlugins/group/bridge/Localhost 0.14
362 TestNetworkPlugins/group/bridge/HairPin 0.13
363 TestNetworkPlugins/group/kubenet/KubeletFlags 0.25
364 TestNetworkPlugins/group/kubenet/NetCatPod 11.29
366 TestStartStop/group/embed-certs/serial/FirstStart 127.2
367 TestNetworkPlugins/group/kubenet/DNS 0.18
368 TestNetworkPlugins/group/kubenet/Localhost 0.18
369 TestNetworkPlugins/group/kubenet/HairPin 0.13
371 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 135.41
372 TestStartStop/group/no-preload/serial/DeployApp 10.35
373 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.13
374 TestStartStop/group/no-preload/serial/Stop 13.34
375 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.18
376 TestStartStop/group/no-preload/serial/SecondStart 300.18
377 TestStartStop/group/old-k8s-version/serial/DeployApp 8.53
378 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.06
379 TestStartStop/group/embed-certs/serial/DeployApp 9.33
380 TestStartStop/group/old-k8s-version/serial/Stop 12.65
381 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.97
382 TestStartStop/group/embed-certs/serial/Stop 13.33
383 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.21
384 TestStartStop/group/old-k8s-version/serial/SecondStart 400.81
385 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.21
386 TestStartStop/group/embed-certs/serial/SecondStart 328.6
387 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 8.32
388 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.28
389 TestStartStop/group/default-k8s-diff-port/serial/Stop 13.38
390 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.19
391 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 305.55
392 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6.01
393 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.08
394 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.67
395 TestStartStop/group/no-preload/serial/Pause 2.35
397 TestStartStop/group/newest-cni/serial/FirstStart 58.64
398 TestStartStop/group/newest-cni/serial/DeployApp 0
399 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.86
400 TestStartStop/group/newest-cni/serial/Stop 13.36
401 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6.01
402 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6.01
403 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.07
404 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.2
405 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.08
406 TestStartStop/group/embed-certs/serial/Pause 2.49
407 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.21
408 TestStartStop/group/newest-cni/serial/SecondStart 38.44
409 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.21
410 TestStartStop/group/default-k8s-diff-port/serial/Pause 2.48
411 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
412 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
413 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.77
414 TestStartStop/group/newest-cni/serial/Pause 2.5
415 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6
416 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.07
417 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.2
418 TestStartStop/group/old-k8s-version/serial/Pause 2.22
TestDownloadOnly/v1.20.0/json-events (9.79s)

=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-751706 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=kvm2 
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-751706 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=kvm2 : (9.791456817s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (9.79s)

TestDownloadOnly/v1.20.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.20.0/preload-exists
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)
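Aside (not part of the recorded output): the preload-exists assertion amounts to checking that the cached preload tarball is present on disk. A minimal, self-contained sketch of that kind of check, using a temporary directory and an empty stand-in file in place of minikube's real cache under `.minikube/cache/preloaded-tarball`:

```shell
# Illustrative sketch of a preload-exists style check: verify that the cached
# preload tarball is present. The filename mirrors the one in the logs below;
# a stand-in file in a temp dir keeps the sketch self-contained.
cache_dir="$(mktemp -d)/preloaded-tarball"
mkdir -p "$cache_dir"
tarball="$cache_dir/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4"
: > "$tarball"   # stand-in for the real downloaded tarball
if [ -f "$tarball" ]; then
  echo "preload exists"
fi
```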

TestDownloadOnly/v1.20.0/LogsDuration (0.06s)

=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-751706
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-751706: exit status 85 (59.400309ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-751706 | jenkins | v1.33.1 | 02 Aug 24 17:26 UTC |          |
	|         | -p download-only-751706        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/02 17:26:12
	Running on machine: ubuntu-20-agent-7
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0802 17:26:12.102664   12575 out.go:291] Setting OutFile to fd 1 ...
	I0802 17:26:12.102826   12575 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0802 17:26:12.102841   12575 out.go:304] Setting ErrFile to fd 2...
	I0802 17:26:12.102848   12575 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0802 17:26:12.103343   12575 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19355-5398/.minikube/bin
	W0802 17:26:12.103522   12575 root.go:314] Error reading config file at /home/jenkins/minikube-integration/19355-5398/.minikube/config/config.json: open /home/jenkins/minikube-integration/19355-5398/.minikube/config/config.json: no such file or directory
	I0802 17:26:12.104126   12575 out.go:298] Setting JSON to true
	I0802 17:26:12.104940   12575 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":521,"bootTime":1722619051,"procs":172,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0802 17:26:12.104998   12575 start.go:139] virtualization: kvm guest
	I0802 17:26:12.107360   12575 out.go:97] [download-only-751706] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	W0802 17:26:12.107466   12575 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/19355-5398/.minikube/cache/preloaded-tarball: no such file or directory
	I0802 17:26:12.107497   12575 notify.go:220] Checking for updates...
	I0802 17:26:12.108790   12575 out.go:169] MINIKUBE_LOCATION=19355
	I0802 17:26:12.110186   12575 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0802 17:26:12.111646   12575 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19355-5398/kubeconfig
	I0802 17:26:12.113017   12575 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19355-5398/.minikube
	I0802 17:26:12.114507   12575 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0802 17:26:12.117119   12575 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0802 17:26:12.117362   12575 driver.go:392] Setting default libvirt URI to qemu:///system
	I0802 17:26:12.221587   12575 out.go:97] Using the kvm2 driver based on user configuration
	I0802 17:26:12.221616   12575 start.go:297] selected driver: kvm2
	I0802 17:26:12.221624   12575 start.go:901] validating driver "kvm2" against <nil>
	I0802 17:26:12.221946   12575 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0802 17:26:12.222069   12575 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19355-5398/.minikube/bin:/home/jenkins/workspace/KVM_Linux_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0802 17:26:12.236809   12575 install.go:137] /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0802 17:26:12.236862   12575 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0802 17:26:12.237340   12575 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0802 17:26:12.237523   12575 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0802 17:26:12.237589   12575 cni.go:84] Creating CNI manager for ""
	I0802 17:26:12.237610   12575 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0802 17:26:12.237681   12575 start.go:340] cluster config:
	{Name:download-only-751706 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-751706 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0802 17:26:12.237890   12575 iso.go:125] acquiring lock: {Name:mk60a609c45f45520dec0098fa54c9404c4e9236 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0802 17:26:12.239872   12575 out.go:97] Downloading VM boot image ...
	I0802 17:26:12.239905   12575 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-amd64.iso.sha256 -> /home/jenkins/minikube-integration/19355-5398/.minikube/cache/iso/amd64/minikube-v1.33.1-1722420371-19355-amd64.iso
	I0802 17:26:16.898173   12575 out.go:97] Starting "download-only-751706" primary control-plane node in "download-only-751706" cluster
	I0802 17:26:16.898202   12575 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0802 17:26:16.919357   12575 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4
	I0802 17:26:16.919390   12575 cache.go:56] Caching tarball of preloaded images
	I0802 17:26:16.919533   12575 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0802 17:26:16.921356   12575 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0802 17:26:16.921380   12575 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4 ...
	I0802 17:26:16.948722   12575 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4?checksum=md5:9a82241e9b8b4ad2b5cca73108f2c7a3 -> /home/jenkins/minikube-integration/19355-5398/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4
	
	
	* The control-plane node download-only-751706 host does not exist
	  To start a cluster, run: "minikube start -p download-only-751706"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.06s)
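Aside (not part of the recorded output): the test treats exit status 85 from `minikube logs` as the expected outcome, since a download-only profile never creates a host. The generic shell pattern for capturing an expected non-zero exit status without letting it abort the script looks like this; `sh -c 'exit 85'` is a stand-in for the real `minikube logs -p download-only-751706` invocation:

```shell
# Illustrative only: record an expected non-zero exit status instead of
# letting it terminate the script. `sh -c 'exit 85'` stands in for the
# real `minikube logs` call against a download-only profile.
status=0
sh -c 'exit 85' || status=$?
if [ "$status" -eq 85 ]; then
  echo "got expected exit status 85"
else
  echo "unexpected exit status: $status" >&2
  exit 1
fi
```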

TestDownloadOnly/v1.20.0/DeleteAll (0.14s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.14s)

TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.13s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-751706
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.13s)

TestDownloadOnly/v1.30.3/json-events (3.81s)

=== RUN   TestDownloadOnly/v1.30.3/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-004188 --force --alsologtostderr --kubernetes-version=v1.30.3 --container-runtime=docker --driver=kvm2 
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-004188 --force --alsologtostderr --kubernetes-version=v1.30.3 --container-runtime=docker --driver=kvm2 : (3.814061209s)
--- PASS: TestDownloadOnly/v1.30.3/json-events (3.81s)

TestDownloadOnly/v1.30.3/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.30.3/preload-exists
--- PASS: TestDownloadOnly/v1.30.3/preload-exists (0.00s)

TestDownloadOnly/v1.30.3/LogsDuration (0.06s)

=== RUN   TestDownloadOnly/v1.30.3/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-004188
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-004188: exit status 85 (61.937707ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-751706 | jenkins | v1.33.1 | 02 Aug 24 17:26 UTC |                     |
	|         | -p download-only-751706        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.33.1 | 02 Aug 24 17:26 UTC | 02 Aug 24 17:26 UTC |
	| delete  | -p download-only-751706        | download-only-751706 | jenkins | v1.33.1 | 02 Aug 24 17:26 UTC | 02 Aug 24 17:26 UTC |
	| start   | -o=json --download-only        | download-only-004188 | jenkins | v1.33.1 | 02 Aug 24 17:26 UTC |                     |
	|         | -p download-only-004188        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/02 17:26:22
	Running on machine: ubuntu-20-agent-7
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0802 17:26:22.218664   12785 out.go:291] Setting OutFile to fd 1 ...
	I0802 17:26:22.218823   12785 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0802 17:26:22.218834   12785 out.go:304] Setting ErrFile to fd 2...
	I0802 17:26:22.218841   12785 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0802 17:26:22.219055   12785 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19355-5398/.minikube/bin
	I0802 17:26:22.219627   12785 out.go:298] Setting JSON to true
	I0802 17:26:22.220452   12785 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":531,"bootTime":1722619051,"procs":170,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0802 17:26:22.220512   12785 start.go:139] virtualization: kvm guest
	I0802 17:26:22.222732   12785 out.go:97] [download-only-004188] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0802 17:26:22.222920   12785 notify.go:220] Checking for updates...
	I0802 17:26:22.224389   12785 out.go:169] MINIKUBE_LOCATION=19355
	I0802 17:26:22.225719   12785 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0802 17:26:22.227072   12785 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19355-5398/kubeconfig
	I0802 17:26:22.228796   12785 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19355-5398/.minikube
	I0802 17:26:22.230881   12785 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	
	
	* The control-plane node download-only-004188 host does not exist
	  To start a cluster, run: "minikube start -p download-only-004188"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.30.3/LogsDuration (0.06s)

TestDownloadOnly/v1.30.3/DeleteAll (0.13s)

=== RUN   TestDownloadOnly/v1.30.3/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.30.3/DeleteAll (0.13s)

TestDownloadOnly/v1.30.3/DeleteAlwaysSucceeds (0.12s)

=== RUN   TestDownloadOnly/v1.30.3/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-004188
--- PASS: TestDownloadOnly/v1.30.3/DeleteAlwaysSucceeds (0.12s)

TestDownloadOnly/v1.31.0-rc.0/json-events (14.67s)

=== RUN   TestDownloadOnly/v1.31.0-rc.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-184388 --force --alsologtostderr --kubernetes-version=v1.31.0-rc.0 --container-runtime=docker --driver=kvm2 
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-184388 --force --alsologtostderr --kubernetes-version=v1.31.0-rc.0 --container-runtime=docker --driver=kvm2 : (14.669219938s)
--- PASS: TestDownloadOnly/v1.31.0-rc.0/json-events (14.67s)

TestDownloadOnly/v1.31.0-rc.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.31.0-rc.0/preload-exists
--- PASS: TestDownloadOnly/v1.31.0-rc.0/preload-exists (0.00s)

TestDownloadOnly/v1.31.0-rc.0/LogsDuration (0.06s)

=== RUN   TestDownloadOnly/v1.31.0-rc.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-184388
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-184388: exit status 85 (56.419242ms)

-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |               Args                |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only           | download-only-751706 | jenkins | v1.33.1 | 02 Aug 24 17:26 UTC |                     |
	|         | -p download-only-751706           |                      |         |         |                     |                     |
	|         | --force --alsologtostderr         |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0      |                      |         |         |                     |                     |
	|         | --container-runtime=docker        |                      |         |         |                     |                     |
	|         | --driver=kvm2                     |                      |         |         |                     |                     |
	| delete  | --all                             | minikube             | jenkins | v1.33.1 | 02 Aug 24 17:26 UTC | 02 Aug 24 17:26 UTC |
	| delete  | -p download-only-751706           | download-only-751706 | jenkins | v1.33.1 | 02 Aug 24 17:26 UTC | 02 Aug 24 17:26 UTC |
	| start   | -o=json --download-only           | download-only-004188 | jenkins | v1.33.1 | 02 Aug 24 17:26 UTC |                     |
	|         | -p download-only-004188           |                      |         |         |                     |                     |
	|         | --force --alsologtostderr         |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3      |                      |         |         |                     |                     |
	|         | --container-runtime=docker        |                      |         |         |                     |                     |
	|         | --driver=kvm2                     |                      |         |         |                     |                     |
	| delete  | --all                             | minikube             | jenkins | v1.33.1 | 02 Aug 24 17:26 UTC | 02 Aug 24 17:26 UTC |
	| delete  | -p download-only-004188           | download-only-004188 | jenkins | v1.33.1 | 02 Aug 24 17:26 UTC | 02 Aug 24 17:26 UTC |
	| start   | -o=json --download-only           | download-only-184388 | jenkins | v1.33.1 | 02 Aug 24 17:26 UTC |                     |
	|         | -p download-only-184388           |                      |         |         |                     |                     |
	|         | --force --alsologtostderr         |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-rc.0 |                      |         |         |                     |                     |
	|         | --container-runtime=docker        |                      |         |         |                     |                     |
	|         | --driver=kvm2                     |                      |         |         |                     |                     |
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/02 17:26:26
	Running on machine: ubuntu-20-agent-7
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0802 17:26:26.341632   12973 out.go:291] Setting OutFile to fd 1 ...
	I0802 17:26:26.341887   12973 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0802 17:26:26.341897   12973 out.go:304] Setting ErrFile to fd 2...
	I0802 17:26:26.341903   12973 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0802 17:26:26.342094   12973 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19355-5398/.minikube/bin
	I0802 17:26:26.342668   12973 out.go:298] Setting JSON to true
	I0802 17:26:26.343568   12973 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":535,"bootTime":1722619051,"procs":170,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0802 17:26:26.343629   12973 start.go:139] virtualization: kvm guest
	I0802 17:26:26.345794   12973 out.go:97] [download-only-184388] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0802 17:26:26.345938   12973 notify.go:220] Checking for updates...
	I0802 17:26:26.347408   12973 out.go:169] MINIKUBE_LOCATION=19355
	I0802 17:26:26.348746   12973 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0802 17:26:26.350131   12973 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19355-5398/kubeconfig
	I0802 17:26:26.351706   12973 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19355-5398/.minikube
	I0802 17:26:26.353102   12973 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0802 17:26:26.355822   12973 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0802 17:26:26.356081   12973 driver.go:392] Setting default libvirt URI to qemu:///system
	I0802 17:26:26.387131   12973 out.go:97] Using the kvm2 driver based on user configuration
	I0802 17:26:26.387156   12973 start.go:297] selected driver: kvm2
	I0802 17:26:26.387162   12973 start.go:901] validating driver "kvm2" against <nil>
	I0802 17:26:26.387539   12973 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0802 17:26:26.387630   12973 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19355-5398/.minikube/bin:/home/jenkins/workspace/KVM_Linux_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0802 17:26:26.401879   12973 install.go:137] /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0802 17:26:26.401928   12973 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0802 17:26:26.402550   12973 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0802 17:26:26.402737   12973 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0802 17:26:26.402764   12973 cni.go:84] Creating CNI manager for ""
	I0802 17:26:26.402778   12973 cni.go:158] "kvm2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0802 17:26:26.402829   12973 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0802 17:26:26.402903   12973 start.go:340] cluster config:
	{Name:download-only-184388 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-rc.0 ClusterName:download-only-184388 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local
ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-rc.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0802 17:26:26.403015   12973 iso.go:125] acquiring lock: {Name:mk60a609c45f45520dec0098fa54c9404c4e9236 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0802 17:26:26.404905   12973 out.go:97] Starting "download-only-184388" primary control-plane node in "download-only-184388" cluster
	I0802 17:26:26.404926   12973 preload.go:131] Checking if preload exists for k8s version v1.31.0-rc.0 and runtime docker
	I0802 17:26:26.427584   12973 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.0-rc.0/preloaded-images-k8s-v18-v1.31.0-rc.0-docker-overlay2-amd64.tar.lz4
	I0802 17:26:26.427610   12973 cache.go:56] Caching tarball of preloaded images
	I0802 17:26:26.427779   12973 preload.go:131] Checking if preload exists for k8s version v1.31.0-rc.0 and runtime docker
	I0802 17:26:26.429590   12973 out.go:97] Downloading Kubernetes v1.31.0-rc.0 preload ...
	I0802 17:26:26.429610   12973 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.31.0-rc.0-docker-overlay2-amd64.tar.lz4 ...
	I0802 17:26:26.451777   12973 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.0-rc.0/preloaded-images-k8s-v18-v1.31.0-rc.0-docker-overlay2-amd64.tar.lz4?checksum=md5:214beb6d5aadd59deaf940ce47a22f04 -> /home/jenkins/minikube-integration/19355-5398/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-rc.0-docker-overlay2-amd64.tar.lz4
	I0802 17:26:30.967880   12973 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.31.0-rc.0-docker-overlay2-amd64.tar.lz4 ...
	I0802 17:26:30.967977   12973 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/19355-5398/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-rc.0-docker-overlay2-amd64.tar.lz4 ...
	I0802 17:26:31.621701   12973 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0-rc.0 on docker
	I0802 17:26:31.622015   12973 profile.go:143] Saving config to /home/jenkins/minikube-integration/19355-5398/.minikube/profiles/download-only-184388/config.json ...
	I0802 17:26:31.622042   12973 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19355-5398/.minikube/profiles/download-only-184388/config.json: {Name:mk531495af4aa24d6b660f750818d6df434151eb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0802 17:26:31.622194   12973 preload.go:131] Checking if preload exists for k8s version v1.31.0-rc.0 and runtime docker
	I0802 17:26:31.622322   12973 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.0-rc.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.0-rc.0/bin/linux/amd64/kubectl.sha256 -> /home/jenkins/minikube-integration/19355-5398/.minikube/cache/linux/amd64/v1.31.0-rc.0/kubectl
	
	
	* The control-plane node download-only-184388 host does not exist
	  To start a cluster, run: "minikube start -p download-only-184388"
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.0-rc.0/LogsDuration (0.06s)
TestDownloadOnly/v1.31.0-rc.0/DeleteAll (0.13s)
=== RUN   TestDownloadOnly/v1.31.0-rc.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.31.0-rc.0/DeleteAll (0.13s)
TestDownloadOnly/v1.31.0-rc.0/DeleteAlwaysSucceeds (0.13s)
=== RUN   TestDownloadOnly/v1.31.0-rc.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-184388
--- PASS: TestDownloadOnly/v1.31.0-rc.0/DeleteAlwaysSucceeds (0.13s)
TestBinaryMirror (0.55s)
=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-454723 --alsologtostderr --binary-mirror http://127.0.0.1:40911 --driver=kvm2 
helpers_test.go:175: Cleaning up "binary-mirror-454723" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-454723
--- PASS: TestBinaryMirror (0.55s)
TestOffline (132.13s)
=== RUN   TestOffline
=== PAUSE TestOffline
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-docker-667912 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2 
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-docker-667912 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2 : (2m11.108619437s)
helpers_test.go:175: Cleaning up "offline-docker-667912" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-docker-667912
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-docker-667912: (1.017312467s)
--- PASS: TestOffline (132.13s)
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1037: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-723198
addons_test.go:1037: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-723198: exit status 85 (47.118944ms)
-- stdout --
	* Profile "addons-723198" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-723198"
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1048: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-723198
addons_test.go:1048: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-723198: exit status 85 (45.769056ms)
-- stdout --
	* Profile "addons-723198" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-723198"
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)
TestAddons/Setup (276.56s)
=== RUN   TestAddons/Setup
addons_test.go:110: (dbg) Run:  out/minikube-linux-amd64 start -p addons-723198 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=kvm2  --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:110: (dbg) Done: out/minikube-linux-amd64 start -p addons-723198 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=kvm2  --addons=ingress --addons=ingress-dns --addons=helm-tiller: (4m36.563779621s)
--- PASS: TestAddons/Setup (276.56s)
TestAddons/serial/Volcano (42.49s)
=== RUN   TestAddons/serial/Volcano
addons_test.go:905: volcano-admission stabilized in 17.428509ms
addons_test.go:913: volcano-controller stabilized in 17.464638ms
addons_test.go:897: volcano-scheduler stabilized in 17.507408ms
addons_test.go:919: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-scheduler-844f6db89b-44v7f" [91ba75c5-f4c7-4526-8962-7b59573e2440] Running
addons_test.go:919: (dbg) TestAddons/serial/Volcano: app=volcano-scheduler healthy within 6.003473385s
addons_test.go:923: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-admission-5f7844f7bc-fn649" [b8edb419-a365-402e-8ba4-5f520e2406a4] Running
addons_test.go:923: (dbg) TestAddons/serial/Volcano: app=volcano-admission healthy within 5.004785675s
addons_test.go:927: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-controllers-59cb4746db-dbwg2" [c7a75580-7573-4278-be74-f06e389f4f70] Running
addons_test.go:927: (dbg) TestAddons/serial/Volcano: app=volcano-controller healthy within 5.003767423s
addons_test.go:932: (dbg) Run:  kubectl --context addons-723198 delete -n volcano-system job volcano-admission-init
addons_test.go:938: (dbg) Run:  kubectl --context addons-723198 create -f testdata/vcjob.yaml
addons_test.go:946: (dbg) Run:  kubectl --context addons-723198 get vcjob -n my-volcano
addons_test.go:964: (dbg) TestAddons/serial/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:344: "test-job-nginx-0" [223a2cd1-431f-4799-b434-f0ece518f866] Pending
helpers_test.go:344: "test-job-nginx-0" [223a2cd1-431f-4799-b434-f0ece518f866] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "test-job-nginx-0" [223a2cd1-431f-4799-b434-f0ece518f866] Running
addons_test.go:964: (dbg) TestAddons/serial/Volcano: volcano.sh/job-name=test-job healthy within 16.005381541s
addons_test.go:968: (dbg) Run:  out/minikube-linux-amd64 -p addons-723198 addons disable volcano --alsologtostderr -v=1
addons_test.go:968: (dbg) Done: out/minikube-linux-amd64 -p addons-723198 addons disable volcano --alsologtostderr -v=1: (10.088612581s)
--- PASS: TestAddons/serial/Volcano (42.49s)
TestAddons/serial/GCPAuth/Namespaces (0.12s)
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:656: (dbg) Run:  kubectl --context addons-723198 create ns new-namespace
addons_test.go:670: (dbg) Run:  kubectl --context addons-723198 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.12s)
TestAddons/parallel/Ingress (92.77s)
=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress
=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-723198 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-723198 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-723198 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [b1005b2b-0a95-4352-bbac-188e1a7849ae] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [b1005b2b-0a95-4352-bbac-188e1a7849ae] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 1m22.0042791s
addons_test.go:264: (dbg) Run:  out/minikube-linux-amd64 -p addons-723198 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:288: (dbg) Run:  kubectl --context addons-723198 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-linux-amd64 -p addons-723198 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.39.195
addons_test.go:308: (dbg) Run:  out/minikube-linux-amd64 -p addons-723198 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:308: (dbg) Done: out/minikube-linux-amd64 -p addons-723198 addons disable ingress-dns --alsologtostderr -v=1: (1.813435755s)
addons_test.go:313: (dbg) Run:  out/minikube-linux-amd64 -p addons-723198 addons disable ingress --alsologtostderr -v=1
addons_test.go:313: (dbg) Done: out/minikube-linux-amd64 -p addons-723198 addons disable ingress --alsologtostderr -v=1: (7.752565654s)
--- PASS: TestAddons/parallel/Ingress (92.77s)
TestAddons/parallel/InspektorGadget (11.82s)
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:848: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-s6tzv" [150c2d9a-a310-47e0-87f3-eada7a9800ce] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:848: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.004458104s
addons_test.go:851: (dbg) Run:  out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-723198
addons_test.go:851: (dbg) Done: out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-723198: (5.810147406s)
--- PASS: TestAddons/parallel/InspektorGadget (11.82s)
TestAddons/parallel/MetricsServer (6.82s)
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:409: metrics-server stabilized in 4.177984ms
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-c59844bb4-gm5qb" [e54a0dd1-9f70-4238-9b98-b5c143ac6901] Running
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 6.004504016s
addons_test.go:417: (dbg) Run:  kubectl --context addons-723198 top pods -n kube-system
addons_test.go:434: (dbg) Run:  out/minikube-linux-amd64 -p addons-723198 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (6.82s)
TestAddons/parallel/HelmTiller (83.47s)
=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:458: tiller-deploy stabilized in 2.284177ms
addons_test.go:460: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:344: "tiller-deploy-6677d64bcd-bqrdb" [a7638602-6f74-49ff-819b-f5b6644a9849] Running
addons_test.go:460: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.00473945s
addons_test.go:475: (dbg) Run:  kubectl --context addons-723198 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:475: (dbg) Non-zero exit: kubectl --context addons-723198 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: exit status 1 (1m0.095334863s)
-- stdout --
	pod "helm-test" deleted
-- /stdout --
** stderr ** 
	error: timed out waiting for the condition
** /stderr **
addons_test.go:475: (dbg) Run:  kubectl --context addons-723198 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:475: (dbg) Non-zero exit: kubectl --context addons-723198 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: exit status 1 (69.844425ms)
** stderr ** 
	Error from server (AlreadyExists): object is being deleted: pods "helm-test" already exists
** /stderr **
addons_test.go:475: (dbg) Run:  kubectl --context addons-723198 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:475: (dbg) Non-zero exit: kubectl --context addons-723198 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: exit status 1 (85.122086ms)
** stderr ** 
	Error from server (AlreadyExists): object is being deleted: pods "helm-test" already exists
** /stderr **
addons_test.go:475: (dbg) Run:  kubectl --context addons-723198 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:475: (dbg) Non-zero exit: kubectl --context addons-723198 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: exit status 1 (56.884598ms)
** stderr ** 
	Error from server (AlreadyExists): object is being deleted: pods "helm-test" already exists
** /stderr **
addons_test.go:475: (dbg) Run:  kubectl --context addons-723198 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:475: (dbg) Non-zero exit: kubectl --context addons-723198 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: exit status 1 (61.886712ms)
** stderr ** 
	Error from server (AlreadyExists): object is being deleted: pods "helm-test" already exists
** /stderr **
addons_test.go:475: (dbg) Run:  kubectl --context addons-723198 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:475: (dbg) Non-zero exit: kubectl --context addons-723198 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: exit status 1 (58.712942ms)
** stderr ** 
	Error from server (AlreadyExists): object is being deleted: pods "helm-test" already exists
** /stderr **
addons_test.go:475: (dbg) Run:  kubectl --context addons-723198 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:475: (dbg) Non-zero exit: kubectl --context addons-723198 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: exit status 1 (58.409461ms)
** stderr ** 
	Error from server (AlreadyExists): object is being deleted: pods "helm-test" already exists
** /stderr **
addons_test.go:475: (dbg) Run:  kubectl --context addons-723198 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:475: (dbg) Done: kubectl --context addons-723198 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (2.730204375s)
addons_test.go:492: (dbg) Run:  out/minikube-linux-amd64 -p addons-723198 addons disable helm-tiller --alsologtostderr -v=1
--- PASS: TestAddons/parallel/HelmTiller (83.47s)
TestAddons/parallel/CSI (134.45s)
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI
=== CONT  TestAddons/parallel/CSI
addons_test.go:567: csi-hostpath-driver pods stabilized in 11.855549ms
addons_test.go:570: (dbg) Run:  kubectl --context addons-723198 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:575: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-723198 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-723198 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-723198 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-723198 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-723198 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-723198 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-723198 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-723198 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-723198 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-723198 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-723198 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-723198 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-723198 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-723198 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-723198 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-723198 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-723198 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-723198 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-723198 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:580: (dbg) Run:  kubectl --context addons-723198 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:585: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [622a71ce-4975-425a-8caf-76e266f8394d] Pending
helpers_test.go:344: "task-pv-pod" [622a71ce-4975-425a-8caf-76e266f8394d] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [622a71ce-4975-425a-8caf-76e266f8394d] Running
addons_test.go:585: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 1m18.003348162s
addons_test.go:590: (dbg) Run:  kubectl --context addons-723198 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:595: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-723198 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-723198 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:600: (dbg) Run:  kubectl --context addons-723198 delete pod task-pv-pod
addons_test.go:606: (dbg) Run:  kubectl --context addons-723198 delete pvc hpvc
addons_test.go:612: (dbg) Run:  kubectl --context addons-723198 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:617: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-723198 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-723198 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-723198 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-723198 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-723198 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-723198 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-723198 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-723198 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-723198 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-723198 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-723198 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-723198 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-723198 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-723198 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-723198 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-723198 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-723198 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-723198 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-723198 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-723198 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:622: (dbg) Run:  kubectl --context addons-723198 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:627: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [cbd2a988-32d0-4d83-8b3d-9ebb81070b5f] Pending
helpers_test.go:344: "task-pv-pod-restore" [cbd2a988-32d0-4d83-8b3d-9ebb81070b5f] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [cbd2a988-32d0-4d83-8b3d-9ebb81070b5f] Running
addons_test.go:627: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.003820074s
addons_test.go:632: (dbg) Run:  kubectl --context addons-723198 delete pod task-pv-pod-restore
addons_test.go:632: (dbg) Done: kubectl --context addons-723198 delete pod task-pv-pod-restore: (1.209903082s)
addons_test.go:636: (dbg) Run:  kubectl --context addons-723198 delete pvc hpvc-restore
addons_test.go:640: (dbg) Run:  kubectl --context addons-723198 delete volumesnapshot new-snapshot-demo
addons_test.go:644: (dbg) Run:  out/minikube-linux-amd64 -p addons-723198 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:644: (dbg) Done: out/minikube-linux-amd64 -p addons-723198 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.640722546s)
addons_test.go:648: (dbg) Run:  out/minikube-linux-amd64 -p addons-723198 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (134.45s)
TestAddons/parallel/Headlamp (19.44s)
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:830: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-723198 --alsologtostderr -v=1
addons_test.go:835: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-7867546754-5hjk8" [11b1e4ee-ec1d-4c91-8b5c-dd2d599376e4] Pending
helpers_test.go:344: "headlamp-7867546754-5hjk8" [11b1e4ee-ec1d-4c91-8b5c-dd2d599376e4] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-7867546754-5hjk8" [11b1e4ee-ec1d-4c91-8b5c-dd2d599376e4] Running / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-7867546754-5hjk8" [11b1e4ee-ec1d-4c91-8b5c-dd2d599376e4] Running
addons_test.go:835: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 13.003635595s
addons_test.go:839: (dbg) Run:  out/minikube-linux-amd64 -p addons-723198 addons disable headlamp --alsologtostderr -v=1
addons_test.go:839: (dbg) Done: out/minikube-linux-amd64 -p addons-723198 addons disable headlamp --alsologtostderr -v=1: (5.653820102s)
--- PASS: TestAddons/parallel/Headlamp (19.44s)
TestAddons/parallel/CloudSpanner (5.53s)
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:867: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-5455fb9b69-26pff" [4fae51a8-ab04-485e-8dd1-c5fa4fb48e27] Running
addons_test.go:867: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.004583186s
addons_test.go:870: (dbg) Run:  out/minikube-linux-amd64 addons disable cloud-spanner -p addons-723198
--- PASS: TestAddons/parallel/CloudSpanner (5.53s)
TestAddons/parallel/LocalPath (69.08s)
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:982: (dbg) Run:  kubectl --context addons-723198 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:988: (dbg) Run:  kubectl --context addons-723198 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:992: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-723198 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-723198 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-723198 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-723198 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-723198 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-723198 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-723198 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-723198 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-723198 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-723198 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-723198 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-723198 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-723198 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-723198 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-723198 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-723198 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-723198 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-723198 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-723198 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-723198 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:995: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [56c68713-23ce-4bd1-8868-20766882c739] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [56c68713-23ce-4bd1-8868-20766882c739] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [56c68713-23ce-4bd1-8868-20766882c739] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:995: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 6.004146891s
addons_test.go:1000: (dbg) Run:  kubectl --context addons-723198 get pvc test-pvc -o=json
addons_test.go:1009: (dbg) Run:  out/minikube-linux-amd64 -p addons-723198 ssh "cat /opt/local-path-provisioner/pvc-09a565ff-7318-4534-8fe5-a1f812b79826_default_test-pvc/file1"
addons_test.go:1021: (dbg) Run:  kubectl --context addons-723198 delete pod test-local-path
addons_test.go:1025: (dbg) Run:  kubectl --context addons-723198 delete pvc test-pvc
addons_test.go:1029: (dbg) Run:  out/minikube-linux-amd64 -p addons-723198 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1029: (dbg) Done: out/minikube-linux-amd64 -p addons-723198 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.314765396s)
--- PASS: TestAddons/parallel/LocalPath (69.08s)
TestAddons/parallel/NvidiaDevicePlugin (5.4s)
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1061: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-g2nk5" [45436a2b-b5fe-4f2a-aac8-f00a5be8dacf] Running
addons_test.go:1061: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.005066627s
addons_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 addons disable nvidia-device-plugin -p addons-723198
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.40s)
TestAddons/parallel/Yakd (11.62s)
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd
=== CONT  TestAddons/parallel/Yakd
addons_test.go:1072: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-799879c74f-rkj2g" [4e9ba445-6ee4-42b7-bcaa-03722f6fcddd] Running
addons_test.go:1072: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.003895183s
addons_test.go:1076: (dbg) Run:  out/minikube-linux-amd64 -p addons-723198 addons disable yakd --alsologtostderr -v=1
addons_test.go:1076: (dbg) Done: out/minikube-linux-amd64 -p addons-723198 addons disable yakd --alsologtostderr -v=1: (5.612454412s)
--- PASS: TestAddons/parallel/Yakd (11.62s)
TestAddons/StoppedEnableDisable (13.55s)
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:174: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-723198
addons_test.go:174: (dbg) Done: out/minikube-linux-amd64 stop -p addons-723198: (13.288331151s)
addons_test.go:178: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-723198
addons_test.go:182: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-723198
addons_test.go:187: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-723198
--- PASS: TestAddons/StoppedEnableDisable (13.55s)
TestCertOptions (105.4s)
=== RUN   TestCertOptions
=== PAUSE TestCertOptions
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-247280 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2 
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-247280 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2 : (1m43.744618544s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-247280 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-247280 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-247280 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-247280" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-247280
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-247280: (1.208907383s)
--- PASS: TestCertOptions (105.40s)
TestCertExpiration (331.52s)
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-090552 --memory=2048 --cert-expiration=3m --driver=kvm2 
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-090552 --memory=2048 --cert-expiration=3m --driver=kvm2 : (1m18.261555282s)
E0802 18:24:21.993061   12563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5398/.minikube/profiles/functional-933143/client.crt: no such file or directory
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-090552 --memory=2048 --cert-expiration=8760h --driver=kvm2 
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-090552 --memory=2048 --cert-expiration=8760h --driver=kvm2 : (1m12.12000939s)
helpers_test.go:175: Cleaning up "cert-expiration-090552" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-090552
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-090552: (1.135698421s)
--- PASS: TestCertExpiration (331.52s)
TestDockerFlags (54.57s)
=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags
=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-linux-amd64 start -p docker-flags-912643 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=kvm2 
docker_test.go:51: (dbg) Done: out/minikube-linux-amd64 start -p docker-flags-912643 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=kvm2 : (52.459651519s)
docker_test.go:56: (dbg) Run:  out/minikube-linux-amd64 -p docker-flags-912643 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:67: (dbg) Run:  out/minikube-linux-amd64 -p docker-flags-912643 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
helpers_test.go:175: Cleaning up "docker-flags-912643" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-flags-912643
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-flags-912643: (1.632869112s)
--- PASS: TestDockerFlags (54.57s)
TestForceSystemdFlag (84.89s)
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-492361 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2 
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-492361 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2 : (1m23.036372685s)
docker_test.go:110: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-492361 ssh "docker info --format {{.CgroupDriver}}"
E0802 18:23:18.607528   12563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5398/.minikube/profiles/skaffold-754678/client.crt: no such file or directory
E0802 18:23:18.613078   12563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5398/.minikube/profiles/skaffold-754678/client.crt: no such file or directory
E0802 18:23:18.623421   12563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5398/.minikube/profiles/skaffold-754678/client.crt: no such file or directory
E0802 18:23:18.643740   12563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5398/.minikube/profiles/skaffold-754678/client.crt: no such file or directory
E0802 18:23:18.684054   12563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5398/.minikube/profiles/skaffold-754678/client.crt: no such file or directory
E0802 18:23:18.764955   12563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5398/.minikube/profiles/skaffold-754678/client.crt: no such file or directory
E0802 18:23:18.925761   12563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5398/.minikube/profiles/skaffold-754678/client.crt: no such file or directory
helpers_test.go:175: Cleaning up "force-systemd-flag-492361" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-492361
E0802 18:23:19.246404   12563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5398/.minikube/profiles/skaffold-754678/client.crt: no such file or directory
E0802 18:23:19.887454   12563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5398/.minikube/profiles/skaffold-754678/client.crt: no such file or directory
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-492361: (1.441976996s)
--- PASS: TestForceSystemdFlag (84.89s)
TestForceSystemdEnv (74.6s)
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-320428 --memory=2048 --alsologtostderr -v=5 --driver=kvm2 
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-320428 --memory=2048 --alsologtostderr -v=5 --driver=kvm2 : (1m13.250533533s)
docker_test.go:110: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-env-320428 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-env-320428" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-320428
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-320428: (1.063075346s)
--- PASS: TestForceSystemdEnv (74.60s)
TestKVMDriverInstallOrUpdate (3.97s)
=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate
=== CONT  TestKVMDriverInstallOrUpdate
--- PASS: TestKVMDriverInstallOrUpdate (3.97s)
TestErrorSpam/setup (49.09s)
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-359132 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-359132 --driver=kvm2 
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-359132 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-359132 --driver=kvm2 : (49.092227622s)
--- PASS: TestErrorSpam/setup (49.09s)
TestErrorSpam/start (0.33s)
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-359132 --log_dir /tmp/nospam-359132 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-359132 --log_dir /tmp/nospam-359132 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-359132 --log_dir /tmp/nospam-359132 start --dry-run
--- PASS: TestErrorSpam/start (0.33s)
TestErrorSpam/status (0.72s)
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-359132 --log_dir /tmp/nospam-359132 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-359132 --log_dir /tmp/nospam-359132 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-359132 --log_dir /tmp/nospam-359132 status
--- PASS: TestErrorSpam/status (0.72s)
TestErrorSpam/pause (1.19s)
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-359132 --log_dir /tmp/nospam-359132 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-359132 --log_dir /tmp/nospam-359132 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-359132 --log_dir /tmp/nospam-359132 pause
--- PASS: TestErrorSpam/pause (1.19s)
TestErrorSpam/unpause (1.22s)
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-359132 --log_dir /tmp/nospam-359132 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-359132 --log_dir /tmp/nospam-359132 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-359132 --log_dir /tmp/nospam-359132 unpause
--- PASS: TestErrorSpam/unpause (1.22s)
TestErrorSpam/stop (15.46s)
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-359132 --log_dir /tmp/nospam-359132 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-359132 --log_dir /tmp/nospam-359132 stop: (12.419347505s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-359132 --log_dir /tmp/nospam-359132 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-359132 --log_dir /tmp/nospam-359132 stop: (1.565544563s)
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-359132 --log_dir /tmp/nospam-359132 stop
error_spam_test.go:182: (dbg) Done: out/minikube-linux-amd64 -p nospam-359132 --log_dir /tmp/nospam-359132 stop: (1.471386185s)
--- PASS: TestErrorSpam/stop (15.46s)
TestFunctional/serial/CopySyncFile (0s)
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: /home/jenkins/minikube-integration/19355-5398/.minikube/files/etc/test/nested/copy/12563/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)
TestFunctional/serial/StartWithProxy (103.93s)
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-linux-amd64 start -p functional-933143 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2 
E0802 17:36:18.809352   12563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5398/.minikube/profiles/addons-723198/client.crt: no such file or directory
E0802 17:36:18.815040   12563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5398/.minikube/profiles/addons-723198/client.crt: no such file or directory
E0802 17:36:18.825311   12563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5398/.minikube/profiles/addons-723198/client.crt: no such file or directory
E0802 17:36:18.845631   12563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5398/.minikube/profiles/addons-723198/client.crt: no such file or directory
E0802 17:36:18.885916   12563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5398/.minikube/profiles/addons-723198/client.crt: no such file or directory
E0802 17:36:18.966267   12563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5398/.minikube/profiles/addons-723198/client.crt: no such file or directory
E0802 17:36:19.126716   12563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5398/.minikube/profiles/addons-723198/client.crt: no such file or directory
E0802 17:36:19.447299   12563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5398/.minikube/profiles/addons-723198/client.crt: no such file or directory
E0802 17:36:20.088327   12563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5398/.minikube/profiles/addons-723198/client.crt: no such file or directory
E0802 17:36:21.368845   12563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5398/.minikube/profiles/addons-723198/client.crt: no such file or directory
E0802 17:36:23.930848   12563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5398/.minikube/profiles/addons-723198/client.crt: no such file or directory
E0802 17:36:29.051837   12563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5398/.minikube/profiles/addons-723198/client.crt: no such file or directory
E0802 17:36:39.292610   12563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5398/.minikube/profiles/addons-723198/client.crt: no such file or directory
E0802 17:36:59.773073   12563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5398/.minikube/profiles/addons-723198/client.crt: no such file or directory
E0802 17:37:40.734722   12563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5398/.minikube/profiles/addons-723198/client.crt: no such file or directory
functional_test.go:2230: (dbg) Done: out/minikube-linux-amd64 start -p functional-933143 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2 : (1m43.930525069s)
--- PASS: TestFunctional/serial/StartWithProxy (103.93s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (37.98s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-linux-amd64 start -p functional-933143 --alsologtostderr -v=8
functional_test.go:655: (dbg) Done: out/minikube-linux-amd64 start -p functional-933143 --alsologtostderr -v=8: (37.977755383s)
functional_test.go:659: soft start took 37.978417584s for "functional-933143" cluster.
--- PASS: TestFunctional/serial/SoftStart (37.98s)

TestFunctional/serial/KubeContext (0.04s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

TestFunctional/serial/KubectlGetPods (0.07s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-933143 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.07s)

TestFunctional/serial/CacheCmd/cache/add_remote (2.22s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-933143 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-933143 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-933143 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (2.22s)

TestFunctional/serial/CacheCmd/cache/add_local (1.23s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-933143 /tmp/TestFunctionalserialCacheCmdcacheadd_local2850703944/001
functional_test.go:1085: (dbg) Run:  out/minikube-linux-amd64 -p functional-933143 cache add minikube-local-cache-test:functional-933143
functional_test.go:1090: (dbg) Run:  out/minikube-linux-amd64 -p functional-933143 cache delete minikube-local-cache-test:functional-933143
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-933143
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.23s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

TestFunctional/serial/CacheCmd/cache/list (0.04s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.04s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.21s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-linux-amd64 -p functional-933143 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.21s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.13s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-linux-amd64 -p functional-933143 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Run:  out/minikube-linux-amd64 -p functional-933143 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-933143 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (216.073031ms)
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-linux-amd64 -p functional-933143 cache reload
functional_test.go:1159: (dbg) Run:  out/minikube-linux-amd64 -p functional-933143 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.13s)

TestFunctional/serial/CacheCmd/cache/delete (0.09s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.09s)

TestFunctional/serial/MinikubeKubectlCmd (0.1s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-linux-amd64 -p functional-933143 kubectl -- --context functional-933143 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.10s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.09s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-933143 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.09s)

TestFunctional/serial/ExtraConfig (42.44s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-linux-amd64 start -p functional-933143 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0802 17:39:02.654930   12563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5398/.minikube/profiles/addons-723198/client.crt: no such file or directory
functional_test.go:753: (dbg) Done: out/minikube-linux-amd64 start -p functional-933143 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (42.435803102s)
functional_test.go:757: restart took 42.435911095s for "functional-933143" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (42.44s)

TestFunctional/serial/ComponentHealth (0.07s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-933143 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)

TestFunctional/serial/LogsCmd (1.01s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-linux-amd64 -p functional-933143 logs
functional_test.go:1232: (dbg) Done: out/minikube-linux-amd64 -p functional-933143 logs: (1.011039561s)
--- PASS: TestFunctional/serial/LogsCmd (1.01s)

TestFunctional/serial/LogsFileCmd (1.03s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-linux-amd64 -p functional-933143 logs --file /tmp/TestFunctionalserialLogsFileCmd2394428136/001/logs.txt
functional_test.go:1246: (dbg) Done: out/minikube-linux-amd64 -p functional-933143 logs --file /tmp/TestFunctionalserialLogsFileCmd2394428136/001/logs.txt: (1.028961238s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.03s)

TestFunctional/serial/InvalidService (4.58s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-933143 apply -f testdata/invalidsvc.yaml
functional_test.go:2331: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-933143
functional_test.go:2331: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-933143: exit status 115 (263.494298ms)
-- stdout --
	|-----------|-------------|-------------|----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL             |
	|-----------|-------------|-------------|----------------------------|
	| default   | invalid-svc |          80 | http://192.168.39.77:32130 |
	|-----------|-------------|-------------|----------------------------|
	
	
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
functional_test.go:2323: (dbg) Run:  kubectl --context functional-933143 delete -f testdata/invalidsvc.yaml
functional_test.go:2323: (dbg) Done: kubectl --context functional-933143 delete -f testdata/invalidsvc.yaml: (1.12397313s)
--- PASS: TestFunctional/serial/InvalidService (4.58s)

TestFunctional/parallel/ConfigCmd (0.31s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-933143 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-933143 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-933143 config get cpus: exit status 14 (61.122785ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-933143 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-933143 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-933143 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-933143 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-933143 config get cpus: exit status 14 (41.369057ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.31s)

TestFunctional/parallel/DashboardCmd (26.2s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-933143 --alsologtostderr -v=1]
functional_test.go:906: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-933143 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 21506: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (26.20s)

TestFunctional/parallel/DryRun (0.27s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-linux-amd64 start -p functional-933143 --dry-run --memory 250MB --alsologtostderr --driver=kvm2 
functional_test.go:970: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-933143 --dry-run --memory 250MB --alsologtostderr --driver=kvm2 : exit status 23 (129.855637ms)
-- stdout --
	* [functional-933143] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19355
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19355-5398/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19355-5398/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	
	
-- /stdout --
** stderr ** 
	I0802 17:39:23.507457   20620 out.go:291] Setting OutFile to fd 1 ...
	I0802 17:39:23.507739   20620 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0802 17:39:23.507752   20620 out.go:304] Setting ErrFile to fd 2...
	I0802 17:39:23.507758   20620 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0802 17:39:23.508025   20620 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19355-5398/.minikube/bin
	I0802 17:39:23.508689   20620 out.go:298] Setting JSON to false
	I0802 17:39:23.509916   20620 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":1313,"bootTime":1722619051,"procs":222,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0802 17:39:23.509996   20620 start.go:139] virtualization: kvm guest
	I0802 17:39:23.512323   20620 out.go:177] * [functional-933143] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0802 17:39:23.513825   20620 notify.go:220] Checking for updates...
	I0802 17:39:23.513855   20620 out.go:177]   - MINIKUBE_LOCATION=19355
	I0802 17:39:23.515366   20620 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0802 17:39:23.516783   20620 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19355-5398/kubeconfig
	I0802 17:39:23.518208   20620 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19355-5398/.minikube
	I0802 17:39:23.519664   20620 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0802 17:39:23.521230   20620 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0802 17:39:23.523082   20620 config.go:182] Loaded profile config "functional-933143": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0802 17:39:23.523512   20620 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0802 17:39:23.523569   20620 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0802 17:39:23.538716   20620 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36287
	I0802 17:39:23.539192   20620 main.go:141] libmachine: () Calling .GetVersion
	I0802 17:39:23.539741   20620 main.go:141] libmachine: Using API Version  1
	I0802 17:39:23.539765   20620 main.go:141] libmachine: () Calling .SetConfigRaw
	I0802 17:39:23.540172   20620 main.go:141] libmachine: () Calling .GetMachineName
	I0802 17:39:23.540355   20620 main.go:141] libmachine: (functional-933143) Calling .DriverName
	I0802 17:39:23.540625   20620 driver.go:392] Setting default libvirt URI to qemu:///system
	I0802 17:39:23.540965   20620 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0802 17:39:23.541000   20620 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0802 17:39:23.555983   20620 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39759
	I0802 17:39:23.556417   20620 main.go:141] libmachine: () Calling .GetVersion
	I0802 17:39:23.556886   20620 main.go:141] libmachine: Using API Version  1
	I0802 17:39:23.556912   20620 main.go:141] libmachine: () Calling .SetConfigRaw
	I0802 17:39:23.557245   20620 main.go:141] libmachine: () Calling .GetMachineName
	I0802 17:39:23.557466   20620 main.go:141] libmachine: (functional-933143) Calling .DriverName
	I0802 17:39:23.590616   20620 out.go:177] * Using the kvm2 driver based on existing profile
	I0802 17:39:23.591895   20620 start.go:297] selected driver: kvm2
	I0802 17:39:23.591911   20620 start.go:901] validating driver "kvm2" against &{Name:functional-933143 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:functional-933143 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.77 Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0802 17:39:23.592066   20620 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0802 17:39:23.594150   20620 out.go:177] 
	W0802 17:39:23.595386   20620 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0802 17:39:23.596778   20620 out.go:177] 
** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-linux-amd64 start -p functional-933143 --dry-run --alsologtostderr -v=1 --driver=kvm2 
--- PASS: TestFunctional/parallel/DryRun (0.27s)

TestFunctional/parallel/InternationalLanguage (0.14s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-linux-amd64 start -p functional-933143 --dry-run --memory 250MB --alsologtostderr --driver=kvm2 
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-933143 --dry-run --memory 250MB --alsologtostderr --driver=kvm2 : exit status 23 (142.637794ms)
-- stdout --
	* [functional-933143] minikube v1.33.1 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19355
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19355-5398/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19355-5398/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	
-- /stdout --
** stderr ** 
	I0802 17:39:23.788848   20686 out.go:291] Setting OutFile to fd 1 ...
	I0802 17:39:23.788968   20686 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0802 17:39:23.789002   20686 out.go:304] Setting ErrFile to fd 2...
	I0802 17:39:23.789011   20686 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0802 17:39:23.789382   20686 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19355-5398/.minikube/bin
	I0802 17:39:23.789942   20686 out.go:298] Setting JSON to false
	I0802 17:39:23.790983   20686 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":1313,"bootTime":1722619051,"procs":229,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0802 17:39:23.791066   20686 start.go:139] virtualization: kvm guest
	I0802 17:39:23.793373   20686 out.go:177] * [functional-933143] minikube v1.33.1 sur Ubuntu 20.04 (kvm/amd64)
	I0802 17:39:23.794892   20686 out.go:177]   - MINIKUBE_LOCATION=19355
	I0802 17:39:23.794896   20686 notify.go:220] Checking for updates...
	I0802 17:39:23.797273   20686 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0802 17:39:23.798645   20686 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19355-5398/kubeconfig
	I0802 17:39:23.799870   20686 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19355-5398/.minikube
	I0802 17:39:23.801044   20686 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0802 17:39:23.802281   20686 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0802 17:39:23.803883   20686 config.go:182] Loaded profile config "functional-933143": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0802 17:39:23.804295   20686 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0802 17:39:23.804365   20686 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0802 17:39:23.822976   20686 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37127
	I0802 17:39:23.823455   20686 main.go:141] libmachine: () Calling .GetVersion
	I0802 17:39:23.823947   20686 main.go:141] libmachine: Using API Version  1
	I0802 17:39:23.823976   20686 main.go:141] libmachine: () Calling .SetConfigRaw
	I0802 17:39:23.824276   20686 main.go:141] libmachine: () Calling .GetMachineName
	I0802 17:39:23.824462   20686 main.go:141] libmachine: (functional-933143) Calling .DriverName
	I0802 17:39:23.824722   20686 driver.go:392] Setting default libvirt URI to qemu:///system
	I0802 17:39:23.825107   20686 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0802 17:39:23.825155   20686 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0802 17:39:23.839847   20686 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44879
	I0802 17:39:23.840205   20686 main.go:141] libmachine: () Calling .GetVersion
	I0802 17:39:23.840667   20686 main.go:141] libmachine: Using API Version  1
	I0802 17:39:23.840683   20686 main.go:141] libmachine: () Calling .SetConfigRaw
	I0802 17:39:23.841005   20686 main.go:141] libmachine: () Calling .GetMachineName
	I0802 17:39:23.841258   20686 main.go:141] libmachine: (functional-933143) Calling .DriverName
	I0802 17:39:23.874405   20686 out.go:177] * Utilisation du pilote kvm2 basé sur le profil existant
	I0802 17:39:23.875629   20686 start.go:297] selected driver: kvm2
	I0802 17:39:23.875645   20686 start.go:901] validating driver "kvm2" against &{Name:functional-933143 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:functional-933143 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.77 Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0802 17:39:23.875771   20686 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0802 17:39:23.877823   20686 out.go:177] 
	W0802 17:39:23.879291   20686 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0802 17:39:23.880630   20686 out.go:177] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.14s)

TestFunctional/parallel/StatusCmd (1.02s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-linux-amd64 -p functional-933143 status
functional_test.go:856: (dbg) Run:  out/minikube-linux-amd64 -p functional-933143 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:868: (dbg) Run:  out/minikube-linux-amd64 -p functional-933143 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.02s)

TestFunctional/parallel/ServiceCmdConnect (7.5s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1625: (dbg) Run:  kubectl --context functional-933143 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1631: (dbg) Run:  kubectl --context functional-933143 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-57b4589c47-dhwgk" [fcf70585-0ed2-48b6-a74c-3ac673c38c58] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-57b4589c47-dhwgk" [fcf70585-0ed2-48b6-a74c-3ac673c38c58] Running
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 7.006594778s
functional_test.go:1645: (dbg) Run:  out/minikube-linux-amd64 -p functional-933143 service hello-node-connect --url
functional_test.go:1651: found endpoint for hello-node-connect: http://192.168.39.77:30409
functional_test.go:1671: http://192.168.39.77:30409: success! body:

Hostname: hello-node-connect-57b4589c47-dhwgk

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.39.77:8080/

Request Headers:
	accept-encoding=gzip
	host=192.168.39.77:30409
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-

--- PASS: TestFunctional/parallel/ServiceCmdConnect (7.50s)

TestFunctional/parallel/AddonsCmd (0.12s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1686: (dbg) Run:  out/minikube-linux-amd64 -p functional-933143 addons list
functional_test.go:1698: (dbg) Run:  out/minikube-linux-amd64 -p functional-933143 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.12s)

TestFunctional/parallel/PersistentVolumeClaim (46.28s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [80644890-fde4-4744-a0f2-d56b0fcafef2] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.005028434s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-933143 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-933143 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-933143 get pvc myclaim -o=json
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-933143 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-933143 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [4c190173-c032-412c-a755-a96d89200a22] Pending
helpers_test.go:344: "sp-pod" [4c190173-c032-412c-a755-a96d89200a22] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [4c190173-c032-412c-a755-a96d89200a22] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 15.197089126s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-933143 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-933143 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-933143 delete -f testdata/storage-provisioner/pod.yaml: (1.989262886s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-933143 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [a63b0707-afdf-43d6-b066-41b7416d6bb8] Pending
helpers_test.go:344: "sp-pod" [a63b0707-afdf-43d6-b066-41b7416d6bb8] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
2024/08/02 17:39:56 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
helpers_test.go:344: "sp-pod" [a63b0707-afdf-43d6-b066-41b7416d6bb8] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 22.003380933s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-933143 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (46.28s)

TestFunctional/parallel/SSHCmd (0.37s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1721: (dbg) Run:  out/minikube-linux-amd64 -p functional-933143 ssh "echo hello"
functional_test.go:1738: (dbg) Run:  out/minikube-linux-amd64 -p functional-933143 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.37s)

TestFunctional/parallel/CpCmd (1.47s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-933143 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-933143 ssh -n functional-933143 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-933143 cp functional-933143:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd1591189106/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-933143 ssh -n functional-933143 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-933143 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-933143 ssh -n functional-933143 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.47s)

TestFunctional/parallel/MySQL (37.65s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1789: (dbg) Run:  kubectl --context functional-933143 replace --force -f testdata/mysql.yaml
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-64454c8b5c-rzz45" [14cf0bfb-2169-4d2e-97b6-aebb61ce404e] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-64454c8b5c-rzz45" [14cf0bfb-2169-4d2e-97b6-aebb61ce404e] Running
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 32.004956979s
functional_test.go:1803: (dbg) Run:  kubectl --context functional-933143 exec mysql-64454c8b5c-rzz45 -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-933143 exec mysql-64454c8b5c-rzz45 -- mysql -ppassword -e "show databases;": exit status 1 (139.9515ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-933143 exec mysql-64454c8b5c-rzz45 -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-933143 exec mysql-64454c8b5c-rzz45 -- mysql -ppassword -e "show databases;": exit status 1 (141.417069ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-933143 exec mysql-64454c8b5c-rzz45 -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-933143 exec mysql-64454c8b5c-rzz45 -- mysql -ppassword -e "show databases;": exit status 1 (123.969231ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-933143 exec mysql-64454c8b5c-rzz45 -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (37.65s)

TestFunctional/parallel/FileSync (0.2s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/12563/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-linux-amd64 -p functional-933143 ssh "sudo cat /etc/test/nested/copy/12563/hosts"
functional_test.go:1932: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.20s)

TestFunctional/parallel/CertSync (1.27s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/12563.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-933143 ssh "sudo cat /etc/ssl/certs/12563.pem"
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/12563.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-933143 ssh "sudo cat /usr/share/ca-certificates/12563.pem"
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-933143 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/125632.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-933143 ssh "sudo cat /etc/ssl/certs/125632.pem"
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/125632.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-933143 ssh "sudo cat /usr/share/ca-certificates/125632.pem"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-933143 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.27s)

TestFunctional/parallel/NodeLabels (0.06s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-933143 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.06s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.2s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-linux-amd64 -p functional-933143 ssh "sudo systemctl is-active crio"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-933143 ssh "sudo systemctl is-active crio": exit status 1 (197.361735ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.20s)

TestFunctional/parallel/License (0.21s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.21s)

TestFunctional/parallel/ServiceCmd/DeployApp (11.25s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1435: (dbg) Run:  kubectl --context functional-933143 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1441: (dbg) Run:  kubectl --context functional-933143 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-6d85cfcfd8-whjnd" [a8d93b18-8408-49de-a154-757f1983817b] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-6d85cfcfd8-whjnd" [a8d93b18-8408-49de-a154-757f1983817b] Running
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 11.005114032s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (11.25s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.35s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1266: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1271: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.35s)

TestFunctional/parallel/ProfileCmd/profile_list (0.31s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1306: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1311: Took "267.817974ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1320: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1325: Took "42.57436ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.31s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.32s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1357: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1362: Took "276.052802ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1370: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1375: Took "46.092214ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.32s)

TestFunctional/parallel/MountCmd/any-port (8.7s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-933143 /tmp/TestFunctionalparallelMountCmdany-port2710500565/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1722620362774232524" to /tmp/TestFunctionalparallelMountCmdany-port2710500565/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1722620362774232524" to /tmp/TestFunctionalparallelMountCmdany-port2710500565/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1722620362774232524" to /tmp/TestFunctionalparallelMountCmdany-port2710500565/001/test-1722620362774232524
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-933143 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-933143 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (226.772755ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-933143 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-933143 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Aug  2 17:39 created-by-test
-rw-r--r-- 1 docker docker 24 Aug  2 17:39 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Aug  2 17:39 test-1722620362774232524
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-933143 ssh cat /mount-9p/test-1722620362774232524
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-933143 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [a5c7799b-6114-4f80-9842-f71165d30e1d] Pending
helpers_test.go:344: "busybox-mount" [a5c7799b-6114-4f80-9842-f71165d30e1d] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [a5c7799b-6114-4f80-9842-f71165d30e1d] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [a5c7799b-6114-4f80-9842-f71165d30e1d] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 6.005974563s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-933143 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-933143 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-933143 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-933143 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-933143 /tmp/TestFunctionalparallelMountCmdany-port2710500565/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (8.70s)

TestFunctional/parallel/Version/short (0.05s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-linux-amd64 -p functional-933143 version --short
--- PASS: TestFunctional/parallel/Version/short (0.05s)
TestFunctional/parallel/Version/components (0.56s)
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-linux-amd64 -p functional-933143 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.56s)
TestFunctional/parallel/ImageCommands/ImageListShort (0.21s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-933143 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-933143 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.30.3
registry.k8s.io/kube-proxy:v1.30.3
registry.k8s.io/kube-controller-manager:v1.30.3
registry.k8s.io/kube-apiserver:v1.30.3
registry.k8s.io/etcd:3.5.12-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.11.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/minikube-local-cache-test:functional-933143
docker.io/kubernetesui/metrics-scraper:<none>
docker.io/kicbase/echo-server:functional-933143
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-933143 image ls --format short --alsologtostderr:
I0802 17:39:43.062593   22454 out.go:291] Setting OutFile to fd 1 ...
I0802 17:39:43.062734   22454 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0802 17:39:43.062746   22454 out.go:304] Setting ErrFile to fd 2...
I0802 17:39:43.062753   22454 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0802 17:39:43.063181   22454 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19355-5398/.minikube/bin
I0802 17:39:43.063989   22454 config.go:182] Loaded profile config "functional-933143": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0802 17:39:43.064144   22454 config.go:182] Loaded profile config "functional-933143": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0802 17:39:43.064652   22454 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0802 17:39:43.064698   22454 main.go:141] libmachine: Launching plugin server for driver kvm2
I0802 17:39:43.080261   22454 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43803
I0802 17:39:43.080734   22454 main.go:141] libmachine: () Calling .GetVersion
I0802 17:39:43.081313   22454 main.go:141] libmachine: Using API Version  1
I0802 17:39:43.081339   22454 main.go:141] libmachine: () Calling .SetConfigRaw
I0802 17:39:43.081724   22454 main.go:141] libmachine: () Calling .GetMachineName
I0802 17:39:43.081917   22454 main.go:141] libmachine: (functional-933143) Calling .GetState
I0802 17:39:43.084091   22454 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0802 17:39:43.084136   22454 main.go:141] libmachine: Launching plugin server for driver kvm2
I0802 17:39:43.099645   22454 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38179
I0802 17:39:43.100067   22454 main.go:141] libmachine: () Calling .GetVersion
I0802 17:39:43.100611   22454 main.go:141] libmachine: Using API Version  1
I0802 17:39:43.100637   22454 main.go:141] libmachine: () Calling .SetConfigRaw
I0802 17:39:43.101010   22454 main.go:141] libmachine: () Calling .GetMachineName
I0802 17:39:43.101235   22454 main.go:141] libmachine: (functional-933143) Calling .DriverName
I0802 17:39:43.101438   22454 ssh_runner.go:195] Run: systemctl --version
I0802 17:39:43.101474   22454 main.go:141] libmachine: (functional-933143) Calling .GetSSHHostname
I0802 17:39:43.104555   22454 main.go:141] libmachine: (functional-933143) DBG | domain functional-933143 has defined MAC address 52:54:00:d2:0c:7f in network mk-functional-933143
I0802 17:39:43.104944   22454 main.go:141] libmachine: (functional-933143) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:0c:7f", ip: ""} in network mk-functional-933143: {Iface:virbr1 ExpiryTime:2024-08-02 18:36:19 +0000 UTC Type:0 Mac:52:54:00:d2:0c:7f Iaid: IPaddr:192.168.39.77 Prefix:24 Hostname:functional-933143 Clientid:01:52:54:00:d2:0c:7f}
I0802 17:39:43.104987   22454 main.go:141] libmachine: (functional-933143) DBG | domain functional-933143 has defined IP address 192.168.39.77 and MAC address 52:54:00:d2:0c:7f in network mk-functional-933143
I0802 17:39:43.105134   22454 main.go:141] libmachine: (functional-933143) Calling .GetSSHPort
I0802 17:39:43.105312   22454 main.go:141] libmachine: (functional-933143) Calling .GetSSHKeyPath
I0802 17:39:43.105461   22454 main.go:141] libmachine: (functional-933143) Calling .GetSSHUsername
I0802 17:39:43.105615   22454 sshutil.go:53] new ssh client: &{IP:192.168.39.77 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-5398/.minikube/machines/functional-933143/id_rsa Username:docker}
I0802 17:39:43.185500   22454 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
I0802 17:39:43.222938   22454 main.go:141] libmachine: Making call to close driver server
I0802 17:39:43.222954   22454 main.go:141] libmachine: (functional-933143) Calling .Close
I0802 17:39:43.223232   22454 main.go:141] libmachine: Successfully made call to close driver server
I0802 17:39:43.223253   22454 main.go:141] libmachine: Making call to close connection to plugin binary
I0802 17:39:43.223268   22454 main.go:141] libmachine: Making call to close driver server
I0802 17:39:43.223273   22454 main.go:141] libmachine: (functional-933143) DBG | Closing plugin on server side
I0802 17:39:43.223276   22454 main.go:141] libmachine: (functional-933143) Calling .Close
I0802 17:39:43.223538   22454 main.go:141] libmachine: Successfully made call to close driver server
I0802 17:39:43.223554   22454 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.21s)
TestFunctional/parallel/ImageCommands/ImageListTable (0.22s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-933143 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-933143 image ls --format table --alsologtostderr:
|---------------------------------------------|-------------------|---------------|--------|
|                    Image                    |        Tag        |   Image ID    |  Size  |
|---------------------------------------------|-------------------|---------------|--------|
| docker.io/library/minikube-local-cache-test | functional-933143 | 9320cdec0f128 | 30B    |
| gcr.io/k8s-minikube/storage-provisioner     | v5                | 6e38f40d628db | 31.5MB |
| registry.k8s.io/pause                       | 3.3               | 0184c1613d929 | 683kB  |
| registry.k8s.io/kube-apiserver              | v1.30.3           | 1f6d574d502f3 | 117MB  |
| registry.k8s.io/pause                       | 3.1               | da86e6ba6ca19 | 742kB  |
| registry.k8s.io/echoserver                  | 1.8               | 82e4c8a736a4f | 95.4MB |
| registry.k8s.io/pause                       | latest            | 350b164e7ae1d | 240kB  |
| registry.k8s.io/kube-controller-manager     | v1.30.3           | 76932a3b37d7e | 111MB  |
| registry.k8s.io/etcd                        | 3.5.12-0          | 3861cfcd7c04c | 149MB  |
| docker.io/kubernetesui/dashboard            | <none>            | 07655ddf2eebe | 246MB  |
| docker.io/kubernetesui/metrics-scraper      | <none>            | 115053965e86b | 43.8MB |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc      | 56cc512116c8f | 4.4MB  |
| registry.k8s.io/pause                       | 3.9               | e6f1816883972 | 744kB  |
| docker.io/kicbase/echo-server               | functional-933143 | 9056ab77afb8e | 4.94MB |
| docker.io/localhost/my-image                | functional-933143 | 322f7f6b5f21d | 1.24MB |
| registry.k8s.io/kube-scheduler              | v1.30.3           | 3edc18e7b7672 | 62MB   |
| registry.k8s.io/kube-proxy                  | v1.30.3           | 55bb025d2cfa5 | 84.7MB |
| docker.io/library/nginx                     | latest            | a72860cb95fd5 | 188MB  |
| registry.k8s.io/coredns/coredns             | v1.11.1           | cbb01a7bd410d | 59.8MB |
|---------------------------------------------|-------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-933143 image ls --format table --alsologtostderr:
I0802 17:39:47.622801   22645 out.go:291] Setting OutFile to fd 1 ...
I0802 17:39:47.622918   22645 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0802 17:39:47.622930   22645 out.go:304] Setting ErrFile to fd 2...
I0802 17:39:47.622937   22645 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0802 17:39:47.623237   22645 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19355-5398/.minikube/bin
I0802 17:39:47.623991   22645 config.go:182] Loaded profile config "functional-933143": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0802 17:39:47.624146   22645 config.go:182] Loaded profile config "functional-933143": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0802 17:39:47.624592   22645 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0802 17:39:47.624639   22645 main.go:141] libmachine: Launching plugin server for driver kvm2
I0802 17:39:47.642171   22645 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34207
I0802 17:39:47.642807   22645 main.go:141] libmachine: () Calling .GetVersion
I0802 17:39:47.643388   22645 main.go:141] libmachine: Using API Version  1
I0802 17:39:47.643410   22645 main.go:141] libmachine: () Calling .SetConfigRaw
I0802 17:39:47.644711   22645 main.go:141] libmachine: () Calling .GetMachineName
I0802 17:39:47.644909   22645 main.go:141] libmachine: (functional-933143) Calling .GetState
I0802 17:39:47.647125   22645 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0802 17:39:47.647173   22645 main.go:141] libmachine: Launching plugin server for driver kvm2
I0802 17:39:47.663375   22645 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43875
I0802 17:39:47.663790   22645 main.go:141] libmachine: () Calling .GetVersion
I0802 17:39:47.664237   22645 main.go:141] libmachine: Using API Version  1
I0802 17:39:47.664256   22645 main.go:141] libmachine: () Calling .SetConfigRaw
I0802 17:39:47.664688   22645 main.go:141] libmachine: () Calling .GetMachineName
I0802 17:39:47.664894   22645 main.go:141] libmachine: (functional-933143) Calling .DriverName
I0802 17:39:47.665126   22645 ssh_runner.go:195] Run: systemctl --version
I0802 17:39:47.665157   22645 main.go:141] libmachine: (functional-933143) Calling .GetSSHHostname
I0802 17:39:47.667874   22645 main.go:141] libmachine: (functional-933143) DBG | domain functional-933143 has defined MAC address 52:54:00:d2:0c:7f in network mk-functional-933143
I0802 17:39:47.668246   22645 main.go:141] libmachine: (functional-933143) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:0c:7f", ip: ""} in network mk-functional-933143: {Iface:virbr1 ExpiryTime:2024-08-02 18:36:19 +0000 UTC Type:0 Mac:52:54:00:d2:0c:7f Iaid: IPaddr:192.168.39.77 Prefix:24 Hostname:functional-933143 Clientid:01:52:54:00:d2:0c:7f}
I0802 17:39:47.668271   22645 main.go:141] libmachine: (functional-933143) DBG | domain functional-933143 has defined IP address 192.168.39.77 and MAC address 52:54:00:d2:0c:7f in network mk-functional-933143
I0802 17:39:47.668430   22645 main.go:141] libmachine: (functional-933143) Calling .GetSSHPort
I0802 17:39:47.668594   22645 main.go:141] libmachine: (functional-933143) Calling .GetSSHKeyPath
I0802 17:39:47.668742   22645 main.go:141] libmachine: (functional-933143) Calling .GetSSHUsername
I0802 17:39:47.668879   22645 sshutil.go:53] new ssh client: &{IP:192.168.39.77 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-5398/.minikube/machines/functional-933143/id_rsa Username:docker}
I0802 17:39:47.755233   22645 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
I0802 17:39:47.788864   22645 main.go:141] libmachine: Making call to close driver server
I0802 17:39:47.788887   22645 main.go:141] libmachine: (functional-933143) Calling .Close
I0802 17:39:47.789209   22645 main.go:141] libmachine: Successfully made call to close driver server
I0802 17:39:47.789238   22645 main.go:141] libmachine: (functional-933143) DBG | Closing plugin on server side
I0802 17:39:47.789241   22645 main.go:141] libmachine: Making call to close connection to plugin binary
I0802 17:39:47.789262   22645 main.go:141] libmachine: Making call to close driver server
I0802 17:39:47.789269   22645 main.go:141] libmachine: (functional-933143) Calling .Close
I0802 17:39:47.789480   22645 main.go:141] libmachine: Successfully made call to close driver server
I0802 17:39:47.789500   22645 main.go:141] libmachine: (functional-933143) DBG | Closing plugin on server side
I0802 17:39:47.789508   22645 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.22s)
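Note for anyone post-processing this report: the Size column in the table above uses human-readable decimal units (30B, 683kB, 31.5MB, 149MB). A minimal sketch for normalizing those strings to byte counts; the decimal (SI) interpretation and the unit set are assumptions based on the values shown, not a documented minikube contract:

```python
def parse_size(s: str) -> int:
    """Convert a size string like '31.5MB' (as shown in `image ls --format table`)
    to a byte count, assuming decimal SI units."""
    units = {"B": 1, "kB": 10**3, "MB": 10**6, "GB": 10**9}
    # Try the longer suffixes first so 'kB' is not misread as plain 'B'.
    for suffix in sorted(units, key=len, reverse=True):
        if s.endswith(suffix):
            return int(float(s[: -len(suffix)]) * units[suffix])
    raise ValueError(f"unrecognized size: {s}")

print(parse_size("30B"))     # 30
print(parse_size("683kB"))   # 683000
print(parse_size("31.5MB"))  # 31500000
```

For exact values, prefer the `--format json` output, which reports sizes in bytes directly.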
TestFunctional/parallel/ImageCommands/ImageListJson (0.19s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-933143 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-933143 image ls --format json --alsologtostderr:
[{"id":"a72860cb95fd59e9c696c66441c64f18e66915fa26b249911e83c3854477ed9a","repoDigests":[],"repoTags":["docker.io/library/nginx:latest"],"size":"188000000"},{"id":"3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.5.12-0"],"size":"149000000"},{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":[],"repoTags":["docker.io/kicbase/echo-server:functional-933143"],"size":"4940000"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31500000"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"742000"},{"id":"cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.1"],"size":"59800000"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4400000"},{"id":"322f7f6b5f21d9af8aaf88e8050ad1c334e7156c94db22d9e6d26122a026eb25","repoDigests":[],"repoTags":["docker.io/localhost/my-image:functional-933143"],"size":"1240000"},{"id":"1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.30.3"],"size":"117000000"},{"id":"3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.30.3"],"size":"62000000"},{"id":"55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.30.3"],"size":"84700000"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"683000"},{"id":"9320cdec0f12876ccc6481b2da36f48e6fffdfdd7b3cb2d6040d9d53c89b86ee","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-933143"],"size":"30"},{"id":"76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.30.3"],"size":"111000000"},{"id":"e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.9"],"size":"744000"},{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":[],"repoTags":["docker.io/kubernetesui/metrics-scraper:\u003cnone\u003e"],"size":"43800000"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":[],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"95400000"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"240000"}]
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-933143 image ls --format json --alsologtostderr:
I0802 17:39:47.430578   22621 out.go:291] Setting OutFile to fd 1 ...
I0802 17:39:47.431014   22621 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0802 17:39:47.431029   22621 out.go:304] Setting ErrFile to fd 2...
I0802 17:39:47.431036   22621 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0802 17:39:47.431458   22621 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19355-5398/.minikube/bin
I0802 17:39:47.432520   22621 config.go:182] Loaded profile config "functional-933143": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0802 17:39:47.432625   22621 config.go:182] Loaded profile config "functional-933143": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0802 17:39:47.432990   22621 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0802 17:39:47.433037   22621 main.go:141] libmachine: Launching plugin server for driver kvm2
I0802 17:39:47.448574   22621 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40963
I0802 17:39:47.449071   22621 main.go:141] libmachine: () Calling .GetVersion
I0802 17:39:47.449744   22621 main.go:141] libmachine: Using API Version  1
I0802 17:39:47.449768   22621 main.go:141] libmachine: () Calling .SetConfigRaw
I0802 17:39:47.450089   22621 main.go:141] libmachine: () Calling .GetMachineName
I0802 17:39:47.450268   22621 main.go:141] libmachine: (functional-933143) Calling .GetState
I0802 17:39:47.452176   22621 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0802 17:39:47.452215   22621 main.go:141] libmachine: Launching plugin server for driver kvm2
I0802 17:39:47.467019   22621 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42833
I0802 17:39:47.467462   22621 main.go:141] libmachine: () Calling .GetVersion
I0802 17:39:47.467892   22621 main.go:141] libmachine: Using API Version  1
I0802 17:39:47.467911   22621 main.go:141] libmachine: () Calling .SetConfigRaw
I0802 17:39:47.468256   22621 main.go:141] libmachine: () Calling .GetMachineName
I0802 17:39:47.468470   22621 main.go:141] libmachine: (functional-933143) Calling .DriverName
I0802 17:39:47.468711   22621 ssh_runner.go:195] Run: systemctl --version
I0802 17:39:47.468740   22621 main.go:141] libmachine: (functional-933143) Calling .GetSSHHostname
I0802 17:39:47.471485   22621 main.go:141] libmachine: (functional-933143) DBG | domain functional-933143 has defined MAC address 52:54:00:d2:0c:7f in network mk-functional-933143
I0802 17:39:47.471864   22621 main.go:141] libmachine: (functional-933143) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:0c:7f", ip: ""} in network mk-functional-933143: {Iface:virbr1 ExpiryTime:2024-08-02 18:36:19 +0000 UTC Type:0 Mac:52:54:00:d2:0c:7f Iaid: IPaddr:192.168.39.77 Prefix:24 Hostname:functional-933143 Clientid:01:52:54:00:d2:0c:7f}
I0802 17:39:47.471894   22621 main.go:141] libmachine: (functional-933143) DBG | domain functional-933143 has defined IP address 192.168.39.77 and MAC address 52:54:00:d2:0c:7f in network mk-functional-933143
I0802 17:39:47.472030   22621 main.go:141] libmachine: (functional-933143) Calling .GetSSHPort
I0802 17:39:47.472180   22621 main.go:141] libmachine: (functional-933143) Calling .GetSSHKeyPath
I0802 17:39:47.472334   22621 main.go:141] libmachine: (functional-933143) Calling .GetSSHUsername
I0802 17:39:47.472495   22621 sshutil.go:53] new ssh client: &{IP:192.168.39.77 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-5398/.minikube/machines/functional-933143/id_rsa Username:docker}
I0802 17:39:47.549429   22621 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
I0802 17:39:47.572288   22621 main.go:141] libmachine: Making call to close driver server
I0802 17:39:47.572299   22621 main.go:141] libmachine: (functional-933143) Calling .Close
I0802 17:39:47.572562   22621 main.go:141] libmachine: Successfully made call to close driver server
I0802 17:39:47.572580   22621 main.go:141] libmachine: Making call to close connection to plugin binary
I0802 17:39:47.572602   22621 main.go:141] libmachine: Making call to close driver server
I0802 17:39:47.572606   22621 main.go:141] libmachine: (functional-933143) DBG | Closing plugin on server side
I0802 17:39:47.572615   22621 main.go:141] libmachine: (functional-933143) Calling .Close
I0802 17:39:47.572937   22621 main.go:141] libmachine: Successfully made call to close driver server
I0802 17:39:47.572958   22621 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.19s)
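Of the four list formats exercised here, the JSON one is the easiest to consume programmatically: a single array of objects with `id`, `repoDigests`, `repoTags`, and a `size` field that is a *string* of bytes. A sketch of indexing that output by tag; the sample array below is abridged from the stdout above, so only the field shapes are taken from the log:

```python
import json

# Abridged sample in the shape emitted by `image ls --format json` above.
raw = """[
  {"id": "a72860cb95fd59e9", "repoDigests": [],
   "repoTags": ["docker.io/library/nginx:latest"], "size": "188000000"},
  {"id": "3861cfcd7c04ccac", "repoDigests": [],
   "repoTags": ["registry.k8s.io/etcd:3.5.12-0"], "size": "149000000"}
]"""

images = json.loads(raw)
# Map every repo tag to its image size; note "size" must be cast from str to int.
by_tag = {tag: int(img["size"]) for img in images for tag in img["repoTags"]}
print(by_tag["docker.io/library/nginx:latest"])  # 188000000
```

An image with multiple tags appears once in the array but contributes one map entry per tag, which matches how the table format repeats rows per tag.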
TestFunctional/parallel/ImageCommands/ImageListYaml (0.21s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-933143 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-933143 image ls --format yaml --alsologtostderr:
- id: 76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.30.3
size: "111000000"
- id: e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.9
size: "744000"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "683000"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4400000"
- id: 3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.5.12-0
size: "149000000"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests: []
repoTags:
- registry.k8s.io/echoserver:1.8
size: "95400000"
- id: 55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.30.3
size: "84700000"
- id: cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.1
size: "59800000"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31500000"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "240000"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "742000"
- id: 9320cdec0f12876ccc6481b2da36f48e6fffdfdd7b3cb2d6040d9d53c89b86ee
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-933143
size: "30"
- id: 1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.30.3
size: "117000000"
- id: 3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.30.3
size: "62000000"
- id: a72860cb95fd59e9c696c66441c64f18e66915fa26b249911e83c3854477ed9a
repoDigests: []
repoTags:
- docker.io/library/nginx:latest
size: "188000000"
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests: []
repoTags:
- docker.io/kicbase/echo-server:functional-933143
size: "4940000"
- id: 115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests: []
repoTags:
- docker.io/kubernetesui/metrics-scraper:<none>
size: "43800000"
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-933143 image ls --format yaml --alsologtostderr:
I0802 17:39:43.272929   22478 out.go:291] Setting OutFile to fd 1 ...
I0802 17:39:43.273388   22478 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0802 17:39:43.273449   22478 out.go:304] Setting ErrFile to fd 2...
I0802 17:39:43.273514   22478 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0802 17:39:43.273954   22478 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19355-5398/.minikube/bin
I0802 17:39:43.274962   22478 config.go:182] Loaded profile config "functional-933143": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0802 17:39:43.275074   22478 config.go:182] Loaded profile config "functional-933143": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0802 17:39:43.275567   22478 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0802 17:39:43.275616   22478 main.go:141] libmachine: Launching plugin server for driver kvm2
I0802 17:39:43.290653   22478 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34417
I0802 17:39:43.291140   22478 main.go:141] libmachine: () Calling .GetVersion
I0802 17:39:43.291724   22478 main.go:141] libmachine: Using API Version  1
I0802 17:39:43.291750   22478 main.go:141] libmachine: () Calling .SetConfigRaw
I0802 17:39:43.292125   22478 main.go:141] libmachine: () Calling .GetMachineName
I0802 17:39:43.292272   22478 main.go:141] libmachine: (functional-933143) Calling .GetState
I0802 17:39:43.294197   22478 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0802 17:39:43.294238   22478 main.go:141] libmachine: Launching plugin server for driver kvm2
I0802 17:39:43.313071   22478 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39911
I0802 17:39:43.313535   22478 main.go:141] libmachine: () Calling .GetVersion
I0802 17:39:43.314029   22478 main.go:141] libmachine: Using API Version  1
I0802 17:39:43.314055   22478 main.go:141] libmachine: () Calling .SetConfigRaw
I0802 17:39:43.314428   22478 main.go:141] libmachine: () Calling .GetMachineName
I0802 17:39:43.314648   22478 main.go:141] libmachine: (functional-933143) Calling .DriverName
I0802 17:39:43.314846   22478 ssh_runner.go:195] Run: systemctl --version
I0802 17:39:43.314869   22478 main.go:141] libmachine: (functional-933143) Calling .GetSSHHostname
I0802 17:39:43.317766   22478 main.go:141] libmachine: (functional-933143) DBG | domain functional-933143 has defined MAC address 52:54:00:d2:0c:7f in network mk-functional-933143
I0802 17:39:43.318236   22478 main.go:141] libmachine: (functional-933143) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:0c:7f", ip: ""} in network mk-functional-933143: {Iface:virbr1 ExpiryTime:2024-08-02 18:36:19 +0000 UTC Type:0 Mac:52:54:00:d2:0c:7f Iaid: IPaddr:192.168.39.77 Prefix:24 Hostname:functional-933143 Clientid:01:52:54:00:d2:0c:7f}
I0802 17:39:43.318268   22478 main.go:141] libmachine: (functional-933143) DBG | domain functional-933143 has defined IP address 192.168.39.77 and MAC address 52:54:00:d2:0c:7f in network mk-functional-933143
I0802 17:39:43.318383   22478 main.go:141] libmachine: (functional-933143) Calling .GetSSHPort
I0802 17:39:43.318556   22478 main.go:141] libmachine: (functional-933143) Calling .GetSSHKeyPath
I0802 17:39:43.318693   22478 main.go:141] libmachine: (functional-933143) Calling .GetSSHUsername
I0802 17:39:43.318830   22478 sshutil.go:53] new ssh client: &{IP:192.168.39.77 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-5398/.minikube/machines/functional-933143/id_rsa Username:docker}
I0802 17:39:43.398101   22478 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
I0802 17:39:43.430651   22478 main.go:141] libmachine: Making call to close driver server
I0802 17:39:43.430667   22478 main.go:141] libmachine: (functional-933143) Calling .Close
I0802 17:39:43.430992   22478 main.go:141] libmachine: Successfully made call to close driver server
I0802 17:39:43.431010   22478 main.go:141] libmachine: Making call to close connection to plugin binary
I0802 17:39:43.431012   22478 main.go:141] libmachine: (functional-933143) DBG | Closing plugin on server side
I0802 17:39:43.431019   22478 main.go:141] libmachine: Making call to close driver server
I0802 17:39:43.431028   22478 main.go:141] libmachine: (functional-933143) Calling .Close
I0802 17:39:43.431267   22478 main.go:141] libmachine: (functional-933143) DBG | Closing plugin on server side
I0802 17:39:43.431284   22478 main.go:141] libmachine: Successfully made call to close driver server
I0802 17:39:43.431299   22478 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.21s)

TestFunctional/parallel/ImageCommands/ImageBuild (3.96s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-linux-amd64 -p functional-933143 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-933143 ssh pgrep buildkitd: exit status 1 (211.332612ms)

** stderr **
	ssh: Process exited with status 1
** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-linux-amd64 -p functional-933143 image build -t localhost/my-image:functional-933143 testdata/build --alsologtostderr
functional_test.go:314: (dbg) Done: out/minikube-linux-amd64 -p functional-933143 image build -t localhost/my-image:functional-933143 testdata/build --alsologtostderr: (3.538463227s)
functional_test.go:322: (dbg) Stderr: out/minikube-linux-amd64 -p functional-933143 image build -t localhost/my-image:functional-933143 testdata/build --alsologtostderr:
I0802 17:39:43.692912   22532 out.go:291] Setting OutFile to fd 1 ...
I0802 17:39:43.693058   22532 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0802 17:39:43.693068   22532 out.go:304] Setting ErrFile to fd 2...
I0802 17:39:43.693072   22532 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0802 17:39:43.693287   22532 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19355-5398/.minikube/bin
I0802 17:39:43.693838   22532 config.go:182] Loaded profile config "functional-933143": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0802 17:39:43.694389   22532 config.go:182] Loaded profile config "functional-933143": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0802 17:39:43.694784   22532 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0802 17:39:43.694841   22532 main.go:141] libmachine: Launching plugin server for driver kvm2
I0802 17:39:43.709829   22532 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42309
I0802 17:39:43.710347   22532 main.go:141] libmachine: () Calling .GetVersion
I0802 17:39:43.710978   22532 main.go:141] libmachine: Using API Version  1
I0802 17:39:43.710999   22532 main.go:141] libmachine: () Calling .SetConfigRaw
I0802 17:39:43.711355   22532 main.go:141] libmachine: () Calling .GetMachineName
I0802 17:39:43.711586   22532 main.go:141] libmachine: (functional-933143) Calling .GetState
I0802 17:39:43.713447   22532 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0802 17:39:43.713495   22532 main.go:141] libmachine: Launching plugin server for driver kvm2
I0802 17:39:43.729039   22532 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35115
I0802 17:39:43.729432   22532 main.go:141] libmachine: () Calling .GetVersion
I0802 17:39:43.729935   22532 main.go:141] libmachine: Using API Version  1
I0802 17:39:43.729955   22532 main.go:141] libmachine: () Calling .SetConfigRaw
I0802 17:39:43.730321   22532 main.go:141] libmachine: () Calling .GetMachineName
I0802 17:39:43.730519   22532 main.go:141] libmachine: (functional-933143) Calling .DriverName
I0802 17:39:43.730738   22532 ssh_runner.go:195] Run: systemctl --version
I0802 17:39:43.730760   22532 main.go:141] libmachine: (functional-933143) Calling .GetSSHHostname
I0802 17:39:43.734126   22532 main.go:141] libmachine: (functional-933143) DBG | domain functional-933143 has defined MAC address 52:54:00:d2:0c:7f in network mk-functional-933143
I0802 17:39:43.734565   22532 main.go:141] libmachine: (functional-933143) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:0c:7f", ip: ""} in network mk-functional-933143: {Iface:virbr1 ExpiryTime:2024-08-02 18:36:19 +0000 UTC Type:0 Mac:52:54:00:d2:0c:7f Iaid: IPaddr:192.168.39.77 Prefix:24 Hostname:functional-933143 Clientid:01:52:54:00:d2:0c:7f}
I0802 17:39:43.734595   22532 main.go:141] libmachine: (functional-933143) DBG | domain functional-933143 has defined IP address 192.168.39.77 and MAC address 52:54:00:d2:0c:7f in network mk-functional-933143
I0802 17:39:43.734705   22532 main.go:141] libmachine: (functional-933143) Calling .GetSSHPort
I0802 17:39:43.734892   22532 main.go:141] libmachine: (functional-933143) Calling .GetSSHKeyPath
I0802 17:39:43.735036   22532 main.go:141] libmachine: (functional-933143) Calling .GetSSHUsername
I0802 17:39:43.735192   22532 sshutil.go:53] new ssh client: &{IP:192.168.39.77 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-5398/.minikube/machines/functional-933143/id_rsa Username:docker}
I0802 17:39:43.814582   22532 build_images.go:161] Building image from path: /tmp/build.3291754167.tar
I0802 17:39:43.814658   22532 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0802 17:39:43.829809   22532 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.3291754167.tar
I0802 17:39:43.839604   22532 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.3291754167.tar: stat -c "%s %y" /var/lib/minikube/build/build.3291754167.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.3291754167.tar': No such file or directory
I0802 17:39:43.839643   22532 ssh_runner.go:362] scp /tmp/build.3291754167.tar --> /var/lib/minikube/build/build.3291754167.tar (3072 bytes)
I0802 17:39:43.874564   22532 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.3291754167
I0802 17:39:43.887885   22532 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.3291754167 -xf /var/lib/minikube/build/build.3291754167.tar
I0802 17:39:43.902951   22532 docker.go:360] Building image: /var/lib/minikube/build/build.3291754167
I0802 17:39:43.903051   22532 ssh_runner.go:195] Run: docker build -t localhost/my-image:functional-933143 /var/lib/minikube/build/build.3291754167
#0 building with "default" instance using docker driver

#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.1s

#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 1.4s

#3 [internal] load .dockerignore
#3 transferring context: 2B 0.0s done
#3 DONE 0.1s

#4 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#4 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.0s done
#4 ...

#5 [internal] load build context
#5 transferring context: 62B 0.0s done
#5 DONE 0.1s

#4 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#4 sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 770B / 770B done
#4 sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee 527B / 527B done
#4 sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a 1.46kB / 1.46kB done
#4 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0B / 772.79kB 0.1s
#4 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 772.79kB / 772.79kB 0.2s done
#4 extracting sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0.1s
#4 extracting sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0.1s done
#4 DONE 0.7s

#6 [2/3] RUN true
#6 DONE 0.4s

#7 [3/3] ADD content.txt /
#7 DONE 0.1s

#8 exporting to image
#8 exporting layers 0.1s done
#8 writing image sha256:322f7f6b5f21d9af8aaf88e8050ad1c334e7156c94db22d9e6d26122a026eb25 done
#8 naming to localhost/my-image:functional-933143 0.0s done
#8 DONE 0.1s
I0802 17:39:47.144531   22532 ssh_runner.go:235] Completed: docker build -t localhost/my-image:functional-933143 /var/lib/minikube/build/build.3291754167: (3.241448818s)
I0802 17:39:47.144629   22532 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.3291754167
I0802 17:39:47.164629   22532 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.3291754167.tar
I0802 17:39:47.180111   22532 build_images.go:217] Built localhost/my-image:functional-933143 from /tmp/build.3291754167.tar
I0802 17:39:47.180148   22532 build_images.go:133] succeeded building to: functional-933143
I0802 17:39:47.180155   22532 build_images.go:134] failed building to: 
I0802 17:39:47.180176   22532 main.go:141] libmachine: Making call to close driver server
I0802 17:39:47.180188   22532 main.go:141] libmachine: (functional-933143) Calling .Close
I0802 17:39:47.180479   22532 main.go:141] libmachine: Successfully made call to close driver server
I0802 17:39:47.180497   22532 main.go:141] libmachine: Making call to close connection to plugin binary
I0802 17:39:47.180505   22532 main.go:141] libmachine: Making call to close driver server
I0802 17:39:47.180509   22532 main.go:141] libmachine: (functional-933143) DBG | Closing plugin on server side
I0802 17:39:47.180513   22532 main.go:141] libmachine: (functional-933143) Calling .Close
I0802 17:39:47.180746   22532 main.go:141] libmachine: Successfully made call to close driver server
I0802 17:39:47.180760   22532 main.go:141] libmachine: Making call to close connection to plugin binary
I0802 17:39:47.180777   22532 main.go:141] libmachine: (functional-933143) DBG | Closing plugin on server side
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-933143 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.96s)

TestFunctional/parallel/ImageCommands/Setup (1.58s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull docker.io/kicbase/echo-server:1.0
functional_test.go:341: (dbg) Done: docker pull docker.io/kicbase/echo-server:1.0: (1.530739952s)
functional_test.go:346: (dbg) Run:  docker tag docker.io/kicbase/echo-server:1.0 docker.io/kicbase/echo-server:functional-933143
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.58s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.09s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-linux-amd64 -p functional-933143 image load --daemon docker.io/kicbase/echo-server:functional-933143 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-933143 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.09s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.81s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-linux-amd64 -p functional-933143 image load --daemon docker.io/kicbase/echo-server:functional-933143 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-933143 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.81s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.44s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull docker.io/kicbase/echo-server:latest
functional_test.go:239: (dbg) Run:  docker tag docker.io/kicbase/echo-server:latest docker.io/kicbase/echo-server:functional-933143
functional_test.go:244: (dbg) Run:  out/minikube-linux-amd64 -p functional-933143 image load --daemon docker.io/kicbase/echo-server:functional-933143 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-933143 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.44s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.29s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-linux-amd64 -p functional-933143 image save docker.io/kicbase/echo-server:functional-933143 /home/jenkins/workspace/KVM_Linux_integration/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.29s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.37s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-linux-amd64 -p functional-933143 image rm docker.io/kicbase/echo-server:functional-933143 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-933143 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.37s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.64s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-linux-amd64 -p functional-933143 image load /home/jenkins/workspace/KVM_Linux_integration/echo-server-save.tar --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-933143 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.64s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.34s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi docker.io/kicbase/echo-server:functional-933143
functional_test.go:423: (dbg) Run:  out/minikube-linux-amd64 -p functional-933143 image save --daemon docker.io/kicbase/echo-server:functional-933143 --alsologtostderr
functional_test.go:428: (dbg) Run:  docker image inspect docker.io/kicbase/echo-server:functional-933143
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.34s)

TestFunctional/parallel/MountCmd/specific-port (1.99s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-933143 /tmp/TestFunctionalparallelMountCmdspecific-port2258219235/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-933143 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-933143 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (219.781786ms)

** stderr **
	ssh: Process exited with status 1
** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-933143 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-933143 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-933143 /tmp/TestFunctionalparallelMountCmdspecific-port2258219235/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-933143 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-933143 ssh "sudo umount -f /mount-9p": exit status 1 (227.313498ms)

-- stdout --
	umount: /mount-9p: not mounted.
-- /stdout --
** stderr **
	ssh: Process exited with status 32
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-933143 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-933143 /tmp/TestFunctionalparallelMountCmdspecific-port2258219235/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.99s)

TestFunctional/parallel/ServiceCmd/List (0.44s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1455: (dbg) Run:  out/minikube-linux-amd64 -p functional-933143 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.44s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.53s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1485: (dbg) Run:  out/minikube-linux-amd64 -p functional-933143 service list -o json
functional_test.go:1490: Took "531.322741ms" to run "out/minikube-linux-amd64 -p functional-933143 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.53s)

TestFunctional/parallel/MountCmd/VerifyCleanup (1.33s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-933143 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1823034988/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-933143 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1823034988/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-933143 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1823034988/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-933143 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-933143 ssh "findmnt -T" /mount1: exit status 1 (262.999377ms)

** stderr **
	ssh: Process exited with status 1
** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-933143 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-933143 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-933143 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-933143 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-933143 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1823034988/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-933143 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1823034988/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-933143 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1823034988/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.33s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.34s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1505: (dbg) Run:  out/minikube-linux-amd64 -p functional-933143 service --namespace=default --https --url hello-node
functional_test.go:1518: found endpoint: https://192.168.39.77:32432
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.34s)

TestFunctional/parallel/ServiceCmd/Format (0.33s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1536: (dbg) Run:  out/minikube-linux-amd64 -p functional-933143 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.33s)

TestFunctional/parallel/ServiceCmd/URL (0.3s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1555: (dbg) Run:  out/minikube-linux-amd64 -p functional-933143 service hello-node --url
functional_test.go:1561: found endpoint for hello-node: http://192.168.39.77:32432
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.30s)

TestFunctional/parallel/DockerEnv/bash (0.74s)

=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:495: (dbg) Run:  /bin/bash -c "eval $(out/minikube-linux-amd64 -p functional-933143 docker-env) && out/minikube-linux-amd64 status -p functional-933143"
functional_test.go:518: (dbg) Run:  /bin/bash -c "eval $(out/minikube-linux-amd64 -p functional-933143 docker-env) && docker images"
--- PASS: TestFunctional/parallel/DockerEnv/bash (0.74s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.1s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-933143 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.10s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.09s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-933143 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.09s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.1s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-933143 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.10s)

TestFunctional/delete_echo-server_images (0.04s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:189: (dbg) Run:  docker rmi -f docker.io/kicbase/echo-server:1.0
functional_test.go:189: (dbg) Run:  docker rmi -f docker.io/kicbase/echo-server:functional-933143
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

TestFunctional/delete_my-image_image (0.01s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-933143
--- PASS: TestFunctional/delete_my-image_image (0.01s)

TestFunctional/delete_minikube_cached_images (0.01s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-933143
--- PASS: TestFunctional/delete_minikube_cached_images (0.01s)

TestGvisorAddon (205.73s)

=== RUN   TestGvisorAddon
=== PAUSE TestGvisorAddon

=== CONT  TestGvisorAddon
gvisor_addon_test.go:52: (dbg) Run:  out/minikube-linux-amd64 start -p gvisor-349582 --memory=2200 --container-runtime=containerd --docker-opt containerd=/var/run/containerd/containerd.sock --driver=kvm2 
gvisor_addon_test.go:52: (dbg) Done: out/minikube-linux-amd64 start -p gvisor-349582 --memory=2200 --container-runtime=containerd --docker-opt containerd=/var/run/containerd/containerd.sock --driver=kvm2 : (54.351700933s)
gvisor_addon_test.go:58: (dbg) Run:  out/minikube-linux-amd64 -p gvisor-349582 cache add gcr.io/k8s-minikube/gvisor-addon:2
gvisor_addon_test.go:58: (dbg) Done: out/minikube-linux-amd64 -p gvisor-349582 cache add gcr.io/k8s-minikube/gvisor-addon:2: (22.241789775s)
gvisor_addon_test.go:63: (dbg) Run:  out/minikube-linux-amd64 -p gvisor-349582 addons enable gvisor
gvisor_addon_test.go:63: (dbg) Done: out/minikube-linux-amd64 -p gvisor-349582 addons enable gvisor: (4.159764562s)
gvisor_addon_test.go:68: (dbg) TestGvisorAddon: waiting 4m0s for pods matching "kubernetes.io/minikube-addons=gvisor" in namespace "kube-system" ...
helpers_test.go:344: "gvisor" [eb182d26-7b9a-457c-9b8e-712cfda27309] Running
gvisor_addon_test.go:68: (dbg) TestGvisorAddon: kubernetes.io/minikube-addons=gvisor healthy within 6.004990934s
gvisor_addon_test.go:73: (dbg) Run:  kubectl --context gvisor-349582 replace --force -f testdata/nginx-gvisor.yaml
gvisor_addon_test.go:78: (dbg) TestGvisorAddon: waiting 4m0s for pods matching "run=nginx,runtime=gvisor" in namespace "default" ...
helpers_test.go:344: "nginx-gvisor" [eced7063-0dd1-4fdc-ac17-31dbb4db2e6e] Pending
helpers_test.go:344: "nginx-gvisor" [eced7063-0dd1-4fdc-ac17-31dbb4db2e6e] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-gvisor" [eced7063-0dd1-4fdc-ac17-31dbb4db2e6e] Running
E0802 18:23:39.091065   12563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5398/.minikube/profiles/skaffold-754678/client.crt: no such file or directory
gvisor_addon_test.go:78: (dbg) TestGvisorAddon: run=nginx,runtime=gvisor healthy within 52.004332557s
gvisor_addon_test.go:83: (dbg) Run:  out/minikube-linux-amd64 stop -p gvisor-349582
gvisor_addon_test.go:83: (dbg) Done: out/minikube-linux-amd64 stop -p gvisor-349582: (2.327593025s)
gvisor_addon_test.go:88: (dbg) Run:  out/minikube-linux-amd64 start -p gvisor-349582 --memory=2200 --container-runtime=containerd --docker-opt containerd=/var/run/containerd/containerd.sock --driver=kvm2 
gvisor_addon_test.go:88: (dbg) Done: out/minikube-linux-amd64 start -p gvisor-349582 --memory=2200 --container-runtime=containerd --docker-opt containerd=/var/run/containerd/containerd.sock --driver=kvm2 : (52.445150245s)
gvisor_addon_test.go:92: (dbg) TestGvisorAddon: waiting 4m0s for pods matching "kubernetes.io/minikube-addons=gvisor" in namespace "kube-system" ...
helpers_test.go:344: "gvisor" [eb182d26-7b9a-457c-9b8e-712cfda27309] Running / Ready:ContainersNotReady (containers with unready status: [gvisor]) / ContainersReady:ContainersNotReady (containers with unready status: [gvisor])
E0802 18:24:40.532215   12563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5398/.minikube/profiles/skaffold-754678/client.crt: no such file or directory
helpers_test.go:344: "gvisor" [eb182d26-7b9a-457c-9b8e-712cfda27309] Running
gvisor_addon_test.go:92: (dbg) TestGvisorAddon: kubernetes.io/minikube-addons=gvisor healthy within 6.004933052s
gvisor_addon_test.go:95: (dbg) TestGvisorAddon: waiting 4m0s for pods matching "run=nginx,runtime=gvisor" in namespace "default" ...
helpers_test.go:344: "nginx-gvisor" [eced7063-0dd1-4fdc-ac17-31dbb4db2e6e] Running / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
gvisor_addon_test.go:95: (dbg) TestGvisorAddon: run=nginx,runtime=gvisor healthy within 5.00372406s
helpers_test.go:175: Cleaning up "gvisor-349582" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p gvisor-349582
--- PASS: TestGvisorAddon (205.73s)

TestMultiControlPlane/serial/StartCluster (218.72s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p ha-049078 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2 
E0802 17:41:18.808187   12563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5398/.minikube/profiles/addons-723198/client.crt: no such file or directory
E0802 17:41:46.496105   12563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5398/.minikube/profiles/addons-723198/client.crt: no such file or directory
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 start -p ha-049078 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2 : (3m38.065256648s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-049078 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (218.72s)

TestMultiControlPlane/serial/DeployApp (5.16s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-049078 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-049078 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 kubectl -p ha-049078 -- rollout status deployment/busybox: (2.982378513s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-049078 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-049078 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-049078 -- exec busybox-fc5497c4f-8mr7k -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-049078 -- exec busybox-fc5497c4f-fvq6m -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-049078 -- exec busybox-fc5497c4f-h72gp -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-049078 -- exec busybox-fc5497c4f-8mr7k -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-049078 -- exec busybox-fc5497c4f-fvq6m -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-049078 -- exec busybox-fc5497c4f-h72gp -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-049078 -- exec busybox-fc5497c4f-8mr7k -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-049078 -- exec busybox-fc5497c4f-fvq6m -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-049078 -- exec busybox-fc5497c4f-h72gp -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (5.16s)

TestMultiControlPlane/serial/PingHostFromPods (1.24s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-049078 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-049078 -- exec busybox-fc5497c4f-8mr7k -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-049078 -- exec busybox-fc5497c4f-8mr7k -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-049078 -- exec busybox-fc5497c4f-fvq6m -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-049078 -- exec busybox-fc5497c4f-fvq6m -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-049078 -- exec busybox-fc5497c4f-h72gp -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-049078 -- exec busybox-fc5497c4f-h72gp -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.24s)
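The host-IP extraction above depends on the fixed line layout of busybox `nslookup` output: the test takes line 5 (`awk 'NR==5'`) and field 3 (`cut -d' ' -f3`) to recover the address of `host.minikube.internal`. A minimal local sketch of that parsing step, run against canned output (the sample text below is illustrative, not captured from this run):

```shell
#!/bin/sh
# Canned busybox-style nslookup output; line 5 is "Address 1: <ip> <name>".
nslookup_output() {
cat <<'EOF'
Server:    10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local

Name:      host.minikube.internal
Address 1: 192.168.39.1 host.minikube.internal
EOF
}

# Same pipeline the test runs inside the pod: take line 5, then field 3.
ip=$(nslookup_output | awk 'NR==5' | cut -d' ' -f3)
echo "$ip"
```

The extracted address is then what the test pings with `ping -c 1`; the pipeline breaks if the resolver prints a different number of header lines, which is why the test pins the busybox image.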

TestMultiControlPlane/serial/AddWorkerNode (62.93s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-049078 -v=7 --alsologtostderr
E0802 17:44:21.992255   12563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5398/.minikube/profiles/functional-933143/client.crt: no such file or directory
E0802 17:44:21.997559   12563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5398/.minikube/profiles/functional-933143/client.crt: no such file or directory
E0802 17:44:22.007870   12563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5398/.minikube/profiles/functional-933143/client.crt: no such file or directory
E0802 17:44:22.028219   12563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5398/.minikube/profiles/functional-933143/client.crt: no such file or directory
E0802 17:44:22.068710   12563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5398/.minikube/profiles/functional-933143/client.crt: no such file or directory
E0802 17:44:22.149138   12563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5398/.minikube/profiles/functional-933143/client.crt: no such file or directory
E0802 17:44:22.309583   12563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5398/.minikube/profiles/functional-933143/client.crt: no such file or directory
E0802 17:44:22.630082   12563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5398/.minikube/profiles/functional-933143/client.crt: no such file or directory
E0802 17:44:23.271012   12563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5398/.minikube/profiles/functional-933143/client.crt: no such file or directory
E0802 17:44:24.551995   12563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5398/.minikube/profiles/functional-933143/client.crt: no such file or directory
E0802 17:44:27.112321   12563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5398/.minikube/profiles/functional-933143/client.crt: no such file or directory
E0802 17:44:32.232595   12563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5398/.minikube/profiles/functional-933143/client.crt: no such file or directory
E0802 17:44:42.473344   12563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5398/.minikube/profiles/functional-933143/client.crt: no such file or directory
E0802 17:45:02.953599   12563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5398/.minikube/profiles/functional-933143/client.crt: no such file or directory
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 node add -p ha-049078 -v=7 --alsologtostderr: (1m2.065510657s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-049078 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (62.93s)

TestMultiControlPlane/serial/NodeLabels (0.08s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-049078 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.08s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (0.55s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.55s)

TestMultiControlPlane/serial/CopyFile (12.45s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-linux-amd64 -p ha-049078 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-049078 cp testdata/cp-test.txt ha-049078:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-049078 ssh -n ha-049078 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-049078 cp ha-049078:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile576961578/001/cp-test_ha-049078.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-049078 ssh -n ha-049078 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-049078 cp ha-049078:/home/docker/cp-test.txt ha-049078-m02:/home/docker/cp-test_ha-049078_ha-049078-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-049078 ssh -n ha-049078 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-049078 ssh -n ha-049078-m02 "sudo cat /home/docker/cp-test_ha-049078_ha-049078-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-049078 cp ha-049078:/home/docker/cp-test.txt ha-049078-m03:/home/docker/cp-test_ha-049078_ha-049078-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-049078 ssh -n ha-049078 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-049078 ssh -n ha-049078-m03 "sudo cat /home/docker/cp-test_ha-049078_ha-049078-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-049078 cp ha-049078:/home/docker/cp-test.txt ha-049078-m04:/home/docker/cp-test_ha-049078_ha-049078-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-049078 ssh -n ha-049078 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-049078 ssh -n ha-049078-m04 "sudo cat /home/docker/cp-test_ha-049078_ha-049078-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-049078 cp testdata/cp-test.txt ha-049078-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-049078 ssh -n ha-049078-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-049078 cp ha-049078-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile576961578/001/cp-test_ha-049078-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-049078 ssh -n ha-049078-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-049078 cp ha-049078-m02:/home/docker/cp-test.txt ha-049078:/home/docker/cp-test_ha-049078-m02_ha-049078.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-049078 ssh -n ha-049078-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-049078 ssh -n ha-049078 "sudo cat /home/docker/cp-test_ha-049078-m02_ha-049078.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-049078 cp ha-049078-m02:/home/docker/cp-test.txt ha-049078-m03:/home/docker/cp-test_ha-049078-m02_ha-049078-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-049078 ssh -n ha-049078-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-049078 ssh -n ha-049078-m03 "sudo cat /home/docker/cp-test_ha-049078-m02_ha-049078-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-049078 cp ha-049078-m02:/home/docker/cp-test.txt ha-049078-m04:/home/docker/cp-test_ha-049078-m02_ha-049078-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-049078 ssh -n ha-049078-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-049078 ssh -n ha-049078-m04 "sudo cat /home/docker/cp-test_ha-049078-m02_ha-049078-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-049078 cp testdata/cp-test.txt ha-049078-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-049078 ssh -n ha-049078-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-049078 cp ha-049078-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile576961578/001/cp-test_ha-049078-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-049078 ssh -n ha-049078-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-049078 cp ha-049078-m03:/home/docker/cp-test.txt ha-049078:/home/docker/cp-test_ha-049078-m03_ha-049078.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-049078 ssh -n ha-049078-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-049078 ssh -n ha-049078 "sudo cat /home/docker/cp-test_ha-049078-m03_ha-049078.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-049078 cp ha-049078-m03:/home/docker/cp-test.txt ha-049078-m02:/home/docker/cp-test_ha-049078-m03_ha-049078-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-049078 ssh -n ha-049078-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-049078 ssh -n ha-049078-m02 "sudo cat /home/docker/cp-test_ha-049078-m03_ha-049078-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-049078 cp ha-049078-m03:/home/docker/cp-test.txt ha-049078-m04:/home/docker/cp-test_ha-049078-m03_ha-049078-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-049078 ssh -n ha-049078-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-049078 ssh -n ha-049078-m04 "sudo cat /home/docker/cp-test_ha-049078-m03_ha-049078-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-049078 cp testdata/cp-test.txt ha-049078-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-049078 ssh -n ha-049078-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-049078 cp ha-049078-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile576961578/001/cp-test_ha-049078-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-049078 ssh -n ha-049078-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-049078 cp ha-049078-m04:/home/docker/cp-test.txt ha-049078:/home/docker/cp-test_ha-049078-m04_ha-049078.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-049078 ssh -n ha-049078-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-049078 ssh -n ha-049078 "sudo cat /home/docker/cp-test_ha-049078-m04_ha-049078.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-049078 cp ha-049078-m04:/home/docker/cp-test.txt ha-049078-m02:/home/docker/cp-test_ha-049078-m04_ha-049078-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-049078 ssh -n ha-049078-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-049078 ssh -n ha-049078-m02 "sudo cat /home/docker/cp-test_ha-049078-m04_ha-049078-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-049078 cp ha-049078-m04:/home/docker/cp-test.txt ha-049078-m03:/home/docker/cp-test_ha-049078-m04_ha-049078-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-049078 ssh -n ha-049078-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-049078 ssh -n ha-049078-m03 "sudo cat /home/docker/cp-test_ha-049078-m04_ha-049078-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (12.45s)
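Every `cp` above is validated the same way: copy the file, then ssh into the target node and `cat` it back to confirm the contents survived the transfer. A local analogue of that copy-then-read-back pattern (the temp-file paths are stand-ins for the node paths, not the test's real locations):

```shell
#!/bin/sh
set -e

# Stand-in for testdata/cp-test.txt
src=$(mktemp)
echo "Test file for minikube cp" > "$src"

# Stand-in for `minikube cp <src> <node>:/home/docker/cp-test.txt`
dst=$(mktemp)
cp "$src" "$dst"

# Stand-in for `minikube ssh -n <node> "sudo cat /home/docker/cp-test.txt"`:
# read the copy back and compare it with the original byte for byte.
cmp -s "$src" "$dst" && echo "round-trip OK"
```

The test repeats this round trip for every (source node, destination node) pair, which is why the section runs 16 copy/read-back cycles across the four machines.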

TestMultiControlPlane/serial/StopSecondaryNode (13.22s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-linux-amd64 -p ha-049078 node stop m02 -v=7 --alsologtostderr
ha_test.go:363: (dbg) Done: out/minikube-linux-amd64 -p ha-049078 node stop m02 -v=7 --alsologtostderr: (12.627770068s)
ha_test.go:369: (dbg) Run:  out/minikube-linux-amd64 -p ha-049078 status -v=7 --alsologtostderr
ha_test.go:369: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-049078 status -v=7 --alsologtostderr: exit status 7 (592.906681ms)

-- stdout --
	ha-049078
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-049078-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-049078-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-049078-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I0802 17:45:29.621610   27174 out.go:291] Setting OutFile to fd 1 ...
	I0802 17:45:29.621743   27174 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0802 17:45:29.621754   27174 out.go:304] Setting ErrFile to fd 2...
	I0802 17:45:29.621761   27174 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0802 17:45:29.621934   27174 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19355-5398/.minikube/bin
	I0802 17:45:29.622163   27174 out.go:298] Setting JSON to false
	I0802 17:45:29.622198   27174 mustload.go:65] Loading cluster: ha-049078
	I0802 17:45:29.622315   27174 notify.go:220] Checking for updates...
	I0802 17:45:29.622729   27174 config.go:182] Loaded profile config "ha-049078": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0802 17:45:29.622749   27174 status.go:255] checking status of ha-049078 ...
	I0802 17:45:29.623313   27174 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0802 17:45:29.623358   27174 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0802 17:45:29.640960   27174 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43361
	I0802 17:45:29.641528   27174 main.go:141] libmachine: () Calling .GetVersion
	I0802 17:45:29.642049   27174 main.go:141] libmachine: Using API Version  1
	I0802 17:45:29.642070   27174 main.go:141] libmachine: () Calling .SetConfigRaw
	I0802 17:45:29.642463   27174 main.go:141] libmachine: () Calling .GetMachineName
	I0802 17:45:29.642679   27174 main.go:141] libmachine: (ha-049078) Calling .GetState
	I0802 17:45:29.644412   27174 status.go:330] ha-049078 host status = "Running" (err=<nil>)
	I0802 17:45:29.644430   27174 host.go:66] Checking if "ha-049078" exists ...
	I0802 17:45:29.644701   27174 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0802 17:45:29.644742   27174 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0802 17:45:29.659659   27174 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35247
	I0802 17:45:29.660073   27174 main.go:141] libmachine: () Calling .GetVersion
	I0802 17:45:29.660505   27174 main.go:141] libmachine: Using API Version  1
	I0802 17:45:29.660527   27174 main.go:141] libmachine: () Calling .SetConfigRaw
	I0802 17:45:29.660831   27174 main.go:141] libmachine: () Calling .GetMachineName
	I0802 17:45:29.660994   27174 main.go:141] libmachine: (ha-049078) Calling .GetIP
	I0802 17:45:29.663794   27174 main.go:141] libmachine: (ha-049078) DBG | domain ha-049078 has defined MAC address 52:54:00:04:8c:e8 in network mk-ha-049078
	I0802 17:45:29.664318   27174 main.go:141] libmachine: (ha-049078) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:04:8c:e8", ip: ""} in network mk-ha-049078: {Iface:virbr1 ExpiryTime:2024-08-02 18:40:29 +0000 UTC Type:0 Mac:52:54:00:04:8c:e8 Iaid: IPaddr:192.168.39.88 Prefix:24 Hostname:ha-049078 Clientid:01:52:54:00:04:8c:e8}
	I0802 17:45:29.664350   27174 main.go:141] libmachine: (ha-049078) DBG | domain ha-049078 has defined IP address 192.168.39.88 and MAC address 52:54:00:04:8c:e8 in network mk-ha-049078
	I0802 17:45:29.664488   27174 host.go:66] Checking if "ha-049078" exists ...
	I0802 17:45:29.664823   27174 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0802 17:45:29.664857   27174 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0802 17:45:29.679369   27174 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34345
	I0802 17:45:29.679827   27174 main.go:141] libmachine: () Calling .GetVersion
	I0802 17:45:29.680296   27174 main.go:141] libmachine: Using API Version  1
	I0802 17:45:29.680320   27174 main.go:141] libmachine: () Calling .SetConfigRaw
	I0802 17:45:29.680655   27174 main.go:141] libmachine: () Calling .GetMachineName
	I0802 17:45:29.680838   27174 main.go:141] libmachine: (ha-049078) Calling .DriverName
	I0802 17:45:29.681005   27174 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0802 17:45:29.681025   27174 main.go:141] libmachine: (ha-049078) Calling .GetSSHHostname
	I0802 17:45:29.683594   27174 main.go:141] libmachine: (ha-049078) DBG | domain ha-049078 has defined MAC address 52:54:00:04:8c:e8 in network mk-ha-049078
	I0802 17:45:29.683968   27174 main.go:141] libmachine: (ha-049078) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:04:8c:e8", ip: ""} in network mk-ha-049078: {Iface:virbr1 ExpiryTime:2024-08-02 18:40:29 +0000 UTC Type:0 Mac:52:54:00:04:8c:e8 Iaid: IPaddr:192.168.39.88 Prefix:24 Hostname:ha-049078 Clientid:01:52:54:00:04:8c:e8}
	I0802 17:45:29.684000   27174 main.go:141] libmachine: (ha-049078) DBG | domain ha-049078 has defined IP address 192.168.39.88 and MAC address 52:54:00:04:8c:e8 in network mk-ha-049078
	I0802 17:45:29.684106   27174 main.go:141] libmachine: (ha-049078) Calling .GetSSHPort
	I0802 17:45:29.684275   27174 main.go:141] libmachine: (ha-049078) Calling .GetSSHKeyPath
	I0802 17:45:29.684410   27174 main.go:141] libmachine: (ha-049078) Calling .GetSSHUsername
	I0802 17:45:29.684523   27174 sshutil.go:53] new ssh client: &{IP:192.168.39.88 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-5398/.minikube/machines/ha-049078/id_rsa Username:docker}
	I0802 17:45:29.762025   27174 ssh_runner.go:195] Run: systemctl --version
	I0802 17:45:29.767847   27174 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0802 17:45:29.781075   27174 kubeconfig.go:125] found "ha-049078" server: "https://192.168.39.254:8443"
	I0802 17:45:29.781103   27174 api_server.go:166] Checking apiserver status ...
	I0802 17:45:29.781131   27174 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0802 17:45:29.795079   27174 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1907/cgroup
	W0802 17:45:29.804548   27174 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1907/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0802 17:45:29.804612   27174 ssh_runner.go:195] Run: ls
	I0802 17:45:29.808779   27174 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0802 17:45:29.814314   27174 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0802 17:45:29.814335   27174 status.go:422] ha-049078 apiserver status = Running (err=<nil>)
	I0802 17:45:29.814344   27174 status.go:257] ha-049078 status: &{Name:ha-049078 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0802 17:45:29.814359   27174 status.go:255] checking status of ha-049078-m02 ...
	I0802 17:45:29.814642   27174 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0802 17:45:29.814672   27174 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0802 17:45:29.829634   27174 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38867
	I0802 17:45:29.830005   27174 main.go:141] libmachine: () Calling .GetVersion
	I0802 17:45:29.830408   27174 main.go:141] libmachine: Using API Version  1
	I0802 17:45:29.830431   27174 main.go:141] libmachine: () Calling .SetConfigRaw
	I0802 17:45:29.830751   27174 main.go:141] libmachine: () Calling .GetMachineName
	I0802 17:45:29.830946   27174 main.go:141] libmachine: (ha-049078-m02) Calling .GetState
	I0802 17:45:29.832566   27174 status.go:330] ha-049078-m02 host status = "Stopped" (err=<nil>)
	I0802 17:45:29.832580   27174 status.go:343] host is not running, skipping remaining checks
	I0802 17:45:29.832587   27174 status.go:257] ha-049078-m02 status: &{Name:ha-049078-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0802 17:45:29.832602   27174 status.go:255] checking status of ha-049078-m03 ...
	I0802 17:45:29.832879   27174 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0802 17:45:29.832914   27174 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0802 17:45:29.847516   27174 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44103
	I0802 17:45:29.847856   27174 main.go:141] libmachine: () Calling .GetVersion
	I0802 17:45:29.848267   27174 main.go:141] libmachine: Using API Version  1
	I0802 17:45:29.848285   27174 main.go:141] libmachine: () Calling .SetConfigRaw
	I0802 17:45:29.848656   27174 main.go:141] libmachine: () Calling .GetMachineName
	I0802 17:45:29.848837   27174 main.go:141] libmachine: (ha-049078-m03) Calling .GetState
	I0802 17:45:29.850406   27174 status.go:330] ha-049078-m03 host status = "Running" (err=<nil>)
	I0802 17:45:29.850423   27174 host.go:66] Checking if "ha-049078-m03" exists ...
	I0802 17:45:29.850716   27174 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0802 17:45:29.850748   27174 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0802 17:45:29.864873   27174 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39783
	I0802 17:45:29.865224   27174 main.go:141] libmachine: () Calling .GetVersion
	I0802 17:45:29.865707   27174 main.go:141] libmachine: Using API Version  1
	I0802 17:45:29.865731   27174 main.go:141] libmachine: () Calling .SetConfigRaw
	I0802 17:45:29.866022   27174 main.go:141] libmachine: () Calling .GetMachineName
	I0802 17:45:29.866204   27174 main.go:141] libmachine: (ha-049078-m03) Calling .GetIP
	I0802 17:45:29.868897   27174 main.go:141] libmachine: (ha-049078-m03) DBG | domain ha-049078-m03 has defined MAC address 52:54:00:c1:28:6d in network mk-ha-049078
	I0802 17:45:29.869313   27174 main.go:141] libmachine: (ha-049078-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:28:6d", ip: ""} in network mk-ha-049078: {Iface:virbr1 ExpiryTime:2024-08-02 18:42:47 +0000 UTC Type:0 Mac:52:54:00:c1:28:6d Iaid: IPaddr:192.168.39.66 Prefix:24 Hostname:ha-049078-m03 Clientid:01:52:54:00:c1:28:6d}
	I0802 17:45:29.869341   27174 main.go:141] libmachine: (ha-049078-m03) DBG | domain ha-049078-m03 has defined IP address 192.168.39.66 and MAC address 52:54:00:c1:28:6d in network mk-ha-049078
	I0802 17:45:29.869454   27174 host.go:66] Checking if "ha-049078-m03" exists ...
	I0802 17:45:29.869770   27174 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0802 17:45:29.869802   27174 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0802 17:45:29.883883   27174 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46499
	I0802 17:45:29.884359   27174 main.go:141] libmachine: () Calling .GetVersion
	I0802 17:45:29.884794   27174 main.go:141] libmachine: Using API Version  1
	I0802 17:45:29.884810   27174 main.go:141] libmachine: () Calling .SetConfigRaw
	I0802 17:45:29.885114   27174 main.go:141] libmachine: () Calling .GetMachineName
	I0802 17:45:29.885283   27174 main.go:141] libmachine: (ha-049078-m03) Calling .DriverName
	I0802 17:45:29.885489   27174 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0802 17:45:29.885510   27174 main.go:141] libmachine: (ha-049078-m03) Calling .GetSSHHostname
	I0802 17:45:29.887809   27174 main.go:141] libmachine: (ha-049078-m03) DBG | domain ha-049078-m03 has defined MAC address 52:54:00:c1:28:6d in network mk-ha-049078
	I0802 17:45:29.888196   27174 main.go:141] libmachine: (ha-049078-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:28:6d", ip: ""} in network mk-ha-049078: {Iface:virbr1 ExpiryTime:2024-08-02 18:42:47 +0000 UTC Type:0 Mac:52:54:00:c1:28:6d Iaid: IPaddr:192.168.39.66 Prefix:24 Hostname:ha-049078-m03 Clientid:01:52:54:00:c1:28:6d}
	I0802 17:45:29.888222   27174 main.go:141] libmachine: (ha-049078-m03) DBG | domain ha-049078-m03 has defined IP address 192.168.39.66 and MAC address 52:54:00:c1:28:6d in network mk-ha-049078
	I0802 17:45:29.888332   27174 main.go:141] libmachine: (ha-049078-m03) Calling .GetSSHPort
	I0802 17:45:29.888472   27174 main.go:141] libmachine: (ha-049078-m03) Calling .GetSSHKeyPath
	I0802 17:45:29.888656   27174 main.go:141] libmachine: (ha-049078-m03) Calling .GetSSHUsername
	I0802 17:45:29.888808   27174 sshutil.go:53] new ssh client: &{IP:192.168.39.66 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-5398/.minikube/machines/ha-049078-m03/id_rsa Username:docker}
	I0802 17:45:29.968329   27174 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0802 17:45:29.989977   27174 kubeconfig.go:125] found "ha-049078" server: "https://192.168.39.254:8443"
	I0802 17:45:29.990012   27174 api_server.go:166] Checking apiserver status ...
	I0802 17:45:29.990053   27174 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0802 17:45:30.004599   27174 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1945/cgroup
	W0802 17:45:30.014737   27174 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1945/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0802 17:45:30.014812   27174 ssh_runner.go:195] Run: ls
	I0802 17:45:30.019068   27174 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0802 17:45:30.023175   27174 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0802 17:45:30.023197   27174 status.go:422] ha-049078-m03 apiserver status = Running (err=<nil>)
	I0802 17:45:30.023205   27174 status.go:257] ha-049078-m03 status: &{Name:ha-049078-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0802 17:45:30.023219   27174 status.go:255] checking status of ha-049078-m04 ...
	I0802 17:45:30.023506   27174 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0802 17:45:30.023534   27174 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0802 17:45:30.039488   27174 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33433
	I0802 17:45:30.039932   27174 main.go:141] libmachine: () Calling .GetVersion
	I0802 17:45:30.040449   27174 main.go:141] libmachine: Using API Version  1
	I0802 17:45:30.040472   27174 main.go:141] libmachine: () Calling .SetConfigRaw
	I0802 17:45:30.040789   27174 main.go:141] libmachine: () Calling .GetMachineName
	I0802 17:45:30.040962   27174 main.go:141] libmachine: (ha-049078-m04) Calling .GetState
	I0802 17:45:30.042362   27174 status.go:330] ha-049078-m04 host status = "Running" (err=<nil>)
	I0802 17:45:30.042379   27174 host.go:66] Checking if "ha-049078-m04" exists ...
	I0802 17:45:30.042652   27174 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0802 17:45:30.042684   27174 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0802 17:45:30.056804   27174 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41795
	I0802 17:45:30.057171   27174 main.go:141] libmachine: () Calling .GetVersion
	I0802 17:45:30.057593   27174 main.go:141] libmachine: Using API Version  1
	I0802 17:45:30.057612   27174 main.go:141] libmachine: () Calling .SetConfigRaw
	I0802 17:45:30.057946   27174 main.go:141] libmachine: () Calling .GetMachineName
	I0802 17:45:30.058124   27174 main.go:141] libmachine: (ha-049078-m04) Calling .GetIP
	I0802 17:45:30.060960   27174 main.go:141] libmachine: (ha-049078-m04) DBG | domain ha-049078-m04 has defined MAC address 52:54:00:ac:00:7d in network mk-ha-049078
	I0802 17:45:30.061350   27174 main.go:141] libmachine: (ha-049078-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:00:7d", ip: ""} in network mk-ha-049078: {Iface:virbr1 ExpiryTime:2024-08-02 18:44:15 +0000 UTC Type:0 Mac:52:54:00:ac:00:7d Iaid: IPaddr:192.168.39.9 Prefix:24 Hostname:ha-049078-m04 Clientid:01:52:54:00:ac:00:7d}
	I0802 17:45:30.061376   27174 main.go:141] libmachine: (ha-049078-m04) DBG | domain ha-049078-m04 has defined IP address 192.168.39.9 and MAC address 52:54:00:ac:00:7d in network mk-ha-049078
	I0802 17:45:30.061529   27174 host.go:66] Checking if "ha-049078-m04" exists ...
	I0802 17:45:30.061816   27174 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0802 17:45:30.061856   27174 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0802 17:45:30.076570   27174 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44961
	I0802 17:45:30.076929   27174 main.go:141] libmachine: () Calling .GetVersion
	I0802 17:45:30.077391   27174 main.go:141] libmachine: Using API Version  1
	I0802 17:45:30.077416   27174 main.go:141] libmachine: () Calling .SetConfigRaw
	I0802 17:45:30.077699   27174 main.go:141] libmachine: () Calling .GetMachineName
	I0802 17:45:30.077870   27174 main.go:141] libmachine: (ha-049078-m04) Calling .DriverName
	I0802 17:45:30.078043   27174 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0802 17:45:30.078060   27174 main.go:141] libmachine: (ha-049078-m04) Calling .GetSSHHostname
	I0802 17:45:30.080668   27174 main.go:141] libmachine: (ha-049078-m04) DBG | domain ha-049078-m04 has defined MAC address 52:54:00:ac:00:7d in network mk-ha-049078
	I0802 17:45:30.081061   27174 main.go:141] libmachine: (ha-049078-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:00:7d", ip: ""} in network mk-ha-049078: {Iface:virbr1 ExpiryTime:2024-08-02 18:44:15 +0000 UTC Type:0 Mac:52:54:00:ac:00:7d Iaid: IPaddr:192.168.39.9 Prefix:24 Hostname:ha-049078-m04 Clientid:01:52:54:00:ac:00:7d}
	I0802 17:45:30.081105   27174 main.go:141] libmachine: (ha-049078-m04) DBG | domain ha-049078-m04 has defined IP address 192.168.39.9 and MAC address 52:54:00:ac:00:7d in network mk-ha-049078
	I0802 17:45:30.081260   27174 main.go:141] libmachine: (ha-049078-m04) Calling .GetSSHPort
	I0802 17:45:30.081437   27174 main.go:141] libmachine: (ha-049078-m04) Calling .GetSSHKeyPath
	I0802 17:45:30.081632   27174 main.go:141] libmachine: (ha-049078-m04) Calling .GetSSHUsername
	I0802 17:45:30.081790   27174 sshutil.go:53] new ssh client: &{IP:192.168.39.9 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-5398/.minikube/machines/ha-049078-m04/id_rsa Username:docker}
	I0802 17:45:30.157999   27174 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0802 17:45:30.172308   27174 status.go:257] ha-049078-m04 status: &{Name:ha-049078-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (13.22s)

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.37s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.37s)

TestMultiControlPlane/serial/RestartSecondaryNode (47.38s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-linux-amd64 -p ha-049078 node start m02 -v=7 --alsologtostderr
E0802 17:45:43.914075   12563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5398/.minikube/profiles/functional-933143/client.crt: no such file or directory
ha_test.go:420: (dbg) Done: out/minikube-linux-amd64 -p ha-049078 node start m02 -v=7 --alsologtostderr: (46.505364954s)
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-049078 status -v=7 --alsologtostderr
ha_test.go:448: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (47.38s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.52s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.52s)

TestMultiControlPlane/serial/RestartClusterKeepsNodes (291.83s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-049078 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-linux-amd64 stop -p ha-049078 -v=7 --alsologtostderr
E0802 17:46:18.808251   12563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5398/.minikube/profiles/addons-723198/client.crt: no such file or directory
ha_test.go:462: (dbg) Done: out/minikube-linux-amd64 stop -p ha-049078 -v=7 --alsologtostderr: (32.480530601s)
ha_test.go:467: (dbg) Run:  out/minikube-linux-amd64 start -p ha-049078 --wait=true -v=7 --alsologtostderr
E0802 17:47:05.835095   12563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5398/.minikube/profiles/functional-933143/client.crt: no such file or directory
E0802 17:49:21.992776   12563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5398/.minikube/profiles/functional-933143/client.crt: no such file or directory
E0802 17:49:49.675862   12563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5398/.minikube/profiles/functional-933143/client.crt: no such file or directory
ha_test.go:467: (dbg) Done: out/minikube-linux-amd64 start -p ha-049078 --wait=true -v=7 --alsologtostderr: (4m19.261168433s)
ha_test.go:472: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-049078
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (291.83s)

TestMultiControlPlane/serial/DeleteSecondaryNode (4.8s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-linux-amd64 -p ha-049078 node delete m03 -v=7 --alsologtostderr
ha_test.go:487: (dbg) Done: out/minikube-linux-amd64 -p ha-049078 node delete m03 -v=7 --alsologtostderr: (4.062973429s)
ha_test.go:493: (dbg) Run:  out/minikube-linux-amd64 -p ha-049078 status -v=7 --alsologtostderr
ha_test.go:511: (dbg) Run:  kubectl get nodes
ha_test.go:519: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (4.80s)

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.36s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.36s)

TestMultiControlPlane/serial/StopCluster (28.3s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-linux-amd64 -p ha-049078 stop -v=7 --alsologtostderr
E0802 17:51:18.808300   12563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5398/.minikube/profiles/addons-723198/client.crt: no such file or directory
ha_test.go:531: (dbg) Done: out/minikube-linux-amd64 -p ha-049078 stop -v=7 --alsologtostderr: (28.203673413s)
ha_test.go:537: (dbg) Run:  out/minikube-linux-amd64 -p ha-049078 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-049078 status -v=7 --alsologtostderr: exit status 7 (100.700914ms)

-- stdout --
	ha-049078
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-049078-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-049078-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0802 17:51:43.682808   29678 out.go:291] Setting OutFile to fd 1 ...
	I0802 17:51:43.683107   29678 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0802 17:51:43.683123   29678 out.go:304] Setting ErrFile to fd 2...
	I0802 17:51:43.683129   29678 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0802 17:51:43.683372   29678 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19355-5398/.minikube/bin
	I0802 17:51:43.683549   29678 out.go:298] Setting JSON to false
	I0802 17:51:43.683572   29678 mustload.go:65] Loading cluster: ha-049078
	I0802 17:51:43.683616   29678 notify.go:220] Checking for updates...
	I0802 17:51:43.684005   29678 config.go:182] Loaded profile config "ha-049078": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0802 17:51:43.684023   29678 status.go:255] checking status of ha-049078 ...
	I0802 17:51:43.684482   29678 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0802 17:51:43.684563   29678 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0802 17:51:43.703056   29678 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45663
	I0802 17:51:43.703472   29678 main.go:141] libmachine: () Calling .GetVersion
	I0802 17:51:43.703976   29678 main.go:141] libmachine: Using API Version  1
	I0802 17:51:43.703998   29678 main.go:141] libmachine: () Calling .SetConfigRaw
	I0802 17:51:43.704382   29678 main.go:141] libmachine: () Calling .GetMachineName
	I0802 17:51:43.704595   29678 main.go:141] libmachine: (ha-049078) Calling .GetState
	I0802 17:51:43.706344   29678 status.go:330] ha-049078 host status = "Stopped" (err=<nil>)
	I0802 17:51:43.706358   29678 status.go:343] host is not running, skipping remaining checks
	I0802 17:51:43.706364   29678 status.go:257] ha-049078 status: &{Name:ha-049078 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0802 17:51:43.706394   29678 status.go:255] checking status of ha-049078-m02 ...
	I0802 17:51:43.706677   29678 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0802 17:51:43.706715   29678 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0802 17:51:43.721290   29678 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37883
	I0802 17:51:43.721682   29678 main.go:141] libmachine: () Calling .GetVersion
	I0802 17:51:43.722168   29678 main.go:141] libmachine: Using API Version  1
	I0802 17:51:43.722199   29678 main.go:141] libmachine: () Calling .SetConfigRaw
	I0802 17:51:43.722535   29678 main.go:141] libmachine: () Calling .GetMachineName
	I0802 17:51:43.722730   29678 main.go:141] libmachine: (ha-049078-m02) Calling .GetState
	I0802 17:51:43.724202   29678 status.go:330] ha-049078-m02 host status = "Stopped" (err=<nil>)
	I0802 17:51:43.724216   29678 status.go:343] host is not running, skipping remaining checks
	I0802 17:51:43.724222   29678 status.go:257] ha-049078-m02 status: &{Name:ha-049078-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0802 17:51:43.724251   29678 status.go:255] checking status of ha-049078-m04 ...
	I0802 17:51:43.724620   29678 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0802 17:51:43.724663   29678 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0802 17:51:43.739256   29678 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40113
	I0802 17:51:43.739712   29678 main.go:141] libmachine: () Calling .GetVersion
	I0802 17:51:43.740146   29678 main.go:141] libmachine: Using API Version  1
	I0802 17:51:43.740176   29678 main.go:141] libmachine: () Calling .SetConfigRaw
	I0802 17:51:43.740476   29678 main.go:141] libmachine: () Calling .GetMachineName
	I0802 17:51:43.740675   29678 main.go:141] libmachine: (ha-049078-m04) Calling .GetState
	I0802 17:51:43.742228   29678 status.go:330] ha-049078-m04 host status = "Stopped" (err=<nil>)
	I0802 17:51:43.742241   29678 status.go:343] host is not running, skipping remaining checks
	I0802 17:51:43.742247   29678 status.go:257] ha-049078-m04 status: &{Name:ha-049078-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (28.30s)

TestMultiControlPlane/serial/RestartCluster (144.44s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-linux-amd64 start -p ha-049078 --wait=true -v=7 --alsologtostderr --driver=kvm2 
E0802 17:52:41.856967   12563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5398/.minikube/profiles/addons-723198/client.crt: no such file or directory
ha_test.go:560: (dbg) Done: out/minikube-linux-amd64 start -p ha-049078 --wait=true -v=7 --alsologtostderr --driver=kvm2 : (2m23.710549977s)
ha_test.go:566: (dbg) Run:  out/minikube-linux-amd64 -p ha-049078 status -v=7 --alsologtostderr
ha_test.go:584: (dbg) Run:  kubectl get nodes
ha_test.go:592: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (144.44s)

TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.39s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.39s)

TestMultiControlPlane/serial/AddSecondaryNode (82.4s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-049078 --control-plane -v=7 --alsologtostderr
E0802 17:54:21.992294   12563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5398/.minikube/profiles/functional-933143/client.crt: no such file or directory
ha_test.go:605: (dbg) Done: out/minikube-linux-amd64 node add -p ha-049078 --control-plane -v=7 --alsologtostderr: (1m21.536042587s)
ha_test.go:611: (dbg) Run:  out/minikube-linux-amd64 -p ha-049078 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (82.40s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.56s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.56s)

TestImageBuild/serial/Setup (49.64s)

=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-linux-amd64 start -p image-869147 --driver=kvm2 
E0802 17:56:18.808219   12563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5398/.minikube/profiles/addons-723198/client.crt: no such file or directory
image_test.go:69: (dbg) Done: out/minikube-linux-amd64 start -p image-869147 --driver=kvm2 : (49.640261504s)
--- PASS: TestImageBuild/serial/Setup (49.64s)

TestImageBuild/serial/NormalBuild (1.98s)

=== RUN   TestImageBuild/serial/NormalBuild
image_test.go:78: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-869147
image_test.go:78: (dbg) Done: out/minikube-linux-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-869147: (1.97673539s)
--- PASS: TestImageBuild/serial/NormalBuild (1.98s)

TestImageBuild/serial/BuildWithBuildArg (1.05s)

=== RUN   TestImageBuild/serial/BuildWithBuildArg
image_test.go:99: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-869147
image_test.go:99: (dbg) Done: out/minikube-linux-amd64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-869147: (1.046419936s)
--- PASS: TestImageBuild/serial/BuildWithBuildArg (1.05s)

TestImageBuild/serial/BuildWithDockerIgnore (0.82s)

=== RUN   TestImageBuild/serial/BuildWithDockerIgnore
image_test.go:133: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-869147
--- PASS: TestImageBuild/serial/BuildWithDockerIgnore (0.82s)

TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.79s)

=== RUN   TestImageBuild/serial/BuildWithSpecifiedDockerfile
image_test.go:88: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-869147
--- PASS: TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.79s)

TestJSONOutput/start/Command (66.72s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-154607 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2 
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-154607 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2 : (1m6.721757966s)
--- PASS: TestJSONOutput/start/Command (66.72s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.54s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-154607 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.54s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.5s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-154607 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.50s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (7.56s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-154607 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-154607 --output=json --user=testUser: (7.560807406s)
--- PASS: TestJSONOutput/stop/Command (7.56s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.19s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-226715 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-226715 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (63.916482ms)

-- stdout --
	{"specversion":"1.0","id":"c3ae9116-5a01-434a-8050-5bd425551cba","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-226715] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"e675d931-7368-4d34-81c4-1e8849b6a693","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19355"}}
	{"specversion":"1.0","id":"f6accf23-1777-423c-a8b1-c1560d720d66","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"0da23f48-28eb-41ef-b424-36169f486145","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19355-5398/kubeconfig"}}
	{"specversion":"1.0","id":"6cd3599d-f1be-4c0b-9947-764850a321bf","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19355-5398/.minikube"}}
	{"specversion":"1.0","id":"bcd632a4-4e5c-4a12-b746-89bc7793dad6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"2656d1d1-4f59-4d48-b7ca-2896d2ea9578","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"47114fa4-e443-42cb-9826-f72d788eaca8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-226715" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-226715
--- PASS: TestErrorJSONOutput (0.19s)
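Each line of the stdout above is a CloudEvents-style JSON envelope; the failure case ends with an `io.k8s.sigs.minikube.error` event carrying `exitcode: 56` and `name: DRV_UNSUPPORTED_OS`. A minimal sketch (not minikube's own test code) of pulling the exit code out of such a line with standard shell tools, using an abbreviated sample event:

```shell
# Sketch only: extract the exit code from a CloudEvents-style error event.
# The sample event below is abbreviated from the log output above.
event='{"specversion":"1.0","type":"io.k8s.sigs.minikube.error","data":{"exitcode":"56","name":"DRV_UNSUPPORTED_OS"}}'
# Pull out the "exitcode" field, then strip it down to the digits.
exitcode=$(printf '%s\n' "$event" | grep -o '"exitcode":"[0-9]\+"' | grep -o '[0-9]\+')
echo "$exitcode"
```

A real consumer of `--output=json` would use a JSON parser rather than grep; this only illustrates what the test asserts on.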

TestMainNoArgs (0.04s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.04s)

TestMinikubeProfile (100.73s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-479081 --driver=kvm2 
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-479081 --driver=kvm2 : (51.126477846s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-481899 --driver=kvm2 
E0802 17:59:21.992304   12563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5398/.minikube/profiles/functional-933143/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-481899 --driver=kvm2 : (46.754732083s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-479081
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-481899
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-481899" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-481899
helpers_test.go:175: Cleaning up "first-479081" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-479081
--- PASS: TestMinikubeProfile (100.73s)

TestMountStart/serial/StartWithMountFirst (30.98s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-323661 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2 
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-323661 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2 : (29.978818359s)
--- PASS: TestMountStart/serial/StartWithMountFirst (30.98s)

TestMountStart/serial/VerifyMountFirst (0.36s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-323661 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-323661 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountFirst (0.36s)
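The VerifyMount steps ssh into the VM and confirm a 9p filesystem is mounted (`mount | grep 9p`). A sketch of the same check against a canned mount(8) line, since no VM is available here; the source address and option string are illustrative, with the msize and port values taken from the start flags above:

```shell
# Sketch of the VerifyMount check against a canned mount-table line
# (illustrative; the real test runs `mount | grep 9p` over ssh in the VM).
mount_table='192.168.39.1 on /minikube-host type 9p (rw,relatime,sync,dirsync,access=any,msize=6543,trans=tcp,port=46464)'
if printf '%s\n' "$mount_table" | grep -q 9p; then
  mount_ok=yes
  echo "9p mount present"
fi
```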

TestMountStart/serial/StartWithMountSecond (27.85s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-342195 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2 
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-342195 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2 : (26.849723523s)
--- PASS: TestMountStart/serial/StartWithMountSecond (27.85s)

TestMountStart/serial/VerifyMountSecond (0.37s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-342195 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-342195 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountSecond (0.37s)

TestMountStart/serial/DeleteFirst (0.69s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-323661 --alsologtostderr -v=5
--- PASS: TestMountStart/serial/DeleteFirst (0.69s)

TestMountStart/serial/VerifyMountPostDelete (0.37s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-342195 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-342195 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.37s)

TestMountStart/serial/Stop (2.27s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-342195
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-342195: (2.274531762s)
--- PASS: TestMountStart/serial/Stop (2.27s)

TestMountStart/serial/RestartStopped (24.44s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-342195
E0802 18:00:45.038594   12563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5398/.minikube/profiles/functional-933143/client.crt: no such file or directory
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-342195: (23.435311062s)
--- PASS: TestMountStart/serial/RestartStopped (24.44s)

TestMountStart/serial/VerifyMountPostStop (0.39s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-342195 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-342195 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.39s)

TestMultiNode/serial/FreshStart2Nodes (146.37s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-279607 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2 
E0802 18:01:18.807832   12563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5398/.minikube/profiles/addons-723198/client.crt: no such file or directory
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-279607 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2 : (2m25.961771311s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-279607 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (146.37s)

TestMultiNode/serial/DeployApp2Nodes (5.76s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-279607 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-279607 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-279607 -- rollout status deployment/busybox: (3.172553161s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-279607 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-279607 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-279607 -- exec busybox-fc5497c4f-gpl5d -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-279607 -- exec busybox-fc5497c4f-gpl5d -- nslookup kubernetes.io: (1.303225418s)
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-279607 -- exec busybox-fc5497c4f-lvbqs -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-279607 -- exec busybox-fc5497c4f-gpl5d -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-279607 -- exec busybox-fc5497c4f-lvbqs -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-279607 -- exec busybox-fc5497c4f-gpl5d -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-279607 -- exec busybox-fc5497c4f-lvbqs -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (5.76s)

TestMultiNode/serial/PingHostFrom2Pods (0.8s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-279607 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-279607 -- exec busybox-fc5497c4f-gpl5d -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-279607 -- exec busybox-fc5497c4f-gpl5d -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-279607 -- exec busybox-fc5497c4f-lvbqs -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-279607 -- exec busybox-fc5497c4f-lvbqs -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.80s)
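The PingHostFrom2Pods helper extracts the host IP from BusyBox `nslookup` output with `awk 'NR==5' | cut -d' ' -f3` before pinging it. The same pipeline run against canned output (the exact nslookup layout below is an assumption; the real command runs inside the busybox pods):

```shell
# Sketch of the address-extraction pipeline from PingHostFrom2Pods,
# fed canned BusyBox-style nslookup output instead of a live lookup.
nslookup_out='Server: 10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local

Name: host.minikube.internal
Address 1: 192.168.39.1 host.minikube.internal'
# Line 5 holds the answer record; field 3 is the resolved IP.
host_ip=$(printf '%s\n' "$nslookup_out" | awk 'NR==5' | cut -d' ' -f3)
echo "$host_ip"
```

Note the pipeline is layout-sensitive: it depends on the answer landing on line 5 and on single-space field separation, which is why it targets BusyBox's nslookup specifically.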

TestMultiNode/serial/AddNode (54.73s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-279607 -v 3 --alsologtostderr
E0802 18:04:21.992193   12563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5398/.minikube/profiles/functional-933143/client.crt: no such file or directory
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-279607 -v 3 --alsologtostderr: (54.168113438s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-279607 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (54.73s)

TestMultiNode/serial/MultiNodeLabels (0.06s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-279607 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

TestMultiNode/serial/ProfileList (0.21s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.21s)

TestMultiNode/serial/CopyFile (7.21s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-279607 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-279607 cp testdata/cp-test.txt multinode-279607:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-279607 ssh -n multinode-279607 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-279607 cp multinode-279607:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile470148933/001/cp-test_multinode-279607.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-279607 ssh -n multinode-279607 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-279607 cp multinode-279607:/home/docker/cp-test.txt multinode-279607-m02:/home/docker/cp-test_multinode-279607_multinode-279607-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-279607 ssh -n multinode-279607 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-279607 ssh -n multinode-279607-m02 "sudo cat /home/docker/cp-test_multinode-279607_multinode-279607-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-279607 cp multinode-279607:/home/docker/cp-test.txt multinode-279607-m03:/home/docker/cp-test_multinode-279607_multinode-279607-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-279607 ssh -n multinode-279607 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-279607 ssh -n multinode-279607-m03 "sudo cat /home/docker/cp-test_multinode-279607_multinode-279607-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-279607 cp testdata/cp-test.txt multinode-279607-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-279607 ssh -n multinode-279607-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-279607 cp multinode-279607-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile470148933/001/cp-test_multinode-279607-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-279607 ssh -n multinode-279607-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-279607 cp multinode-279607-m02:/home/docker/cp-test.txt multinode-279607:/home/docker/cp-test_multinode-279607-m02_multinode-279607.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-279607 ssh -n multinode-279607-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-279607 ssh -n multinode-279607 "sudo cat /home/docker/cp-test_multinode-279607-m02_multinode-279607.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-279607 cp multinode-279607-m02:/home/docker/cp-test.txt multinode-279607-m03:/home/docker/cp-test_multinode-279607-m02_multinode-279607-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-279607 ssh -n multinode-279607-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-279607 ssh -n multinode-279607-m03 "sudo cat /home/docker/cp-test_multinode-279607-m02_multinode-279607-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-279607 cp testdata/cp-test.txt multinode-279607-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-279607 ssh -n multinode-279607-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-279607 cp multinode-279607-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile470148933/001/cp-test_multinode-279607-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-279607 ssh -n multinode-279607-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-279607 cp multinode-279607-m03:/home/docker/cp-test.txt multinode-279607:/home/docker/cp-test_multinode-279607-m03_multinode-279607.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-279607 ssh -n multinode-279607-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-279607 ssh -n multinode-279607 "sudo cat /home/docker/cp-test_multinode-279607-m03_multinode-279607.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-279607 cp multinode-279607-m03:/home/docker/cp-test.txt multinode-279607-m02:/home/docker/cp-test_multinode-279607-m03_multinode-279607-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-279607 ssh -n multinode-279607-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-279607 ssh -n multinode-279607-m02 "sudo cat /home/docker/cp-test_multinode-279607-m03_multinode-279607-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (7.21s)
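CopyFile round-trips `testdata/cp-test.txt` through `minikube cp` between host and nodes, verifying each hop with `ssh ... sudo cat`. A purely local sketch of that copy-then-verify pattern, using plain `cp` and `cmp` (temp paths stand in for the node-side `/home/docker/cp-test.txt`):

```shell
# Local sketch of the CopyFile round-trip check: copy a file, read the copy
# back, and compare contents. All paths are illustrative stand-ins.
workdir=$(mktemp -d)
printf 'hello from cp-test\n' > "$workdir/cp-test.txt"
# Stand-in for: minikube cp testdata/cp-test.txt <node>:/home/docker/cp-test.txt
cp "$workdir/cp-test.txt" "$workdir/cp-test_node.txt"
# Stand-in for: minikube ssh -n <node> "sudo cat /home/docker/cp-test.txt"
if cmp -s "$workdir/cp-test.txt" "$workdir/cp-test_node.txt"; then
  copy_result="contents match"
fi
echo "$copy_result"
rm -rf "$workdir"
```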

TestMultiNode/serial/StopNode (3.47s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-279607 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-279607 node stop m03: (2.560036694s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-279607 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-279607 status: exit status 7 (437.052494ms)

-- stdout --
	multinode-279607
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-279607-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-279607-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-279607 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-279607 status --alsologtostderr: exit status 7 (467.899373ms)

-- stdout --
	multinode-279607
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-279607-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-279607-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0802 18:04:33.480484   38061 out.go:291] Setting OutFile to fd 1 ...
	I0802 18:04:33.480810   38061 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0802 18:04:33.480824   38061 out.go:304] Setting ErrFile to fd 2...
	I0802 18:04:33.480841   38061 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0802 18:04:33.481507   38061 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19355-5398/.minikube/bin
	I0802 18:04:33.481739   38061 out.go:298] Setting JSON to false
	I0802 18:04:33.481773   38061 mustload.go:65] Loading cluster: multinode-279607
	I0802 18:04:33.481815   38061 notify.go:220] Checking for updates...
	I0802 18:04:33.482186   38061 config.go:182] Loaded profile config "multinode-279607": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0802 18:04:33.482205   38061 status.go:255] checking status of multinode-279607 ...
	I0802 18:04:33.482613   38061 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0802 18:04:33.482695   38061 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0802 18:04:33.500310   38061 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37877
	I0802 18:04:33.500798   38061 main.go:141] libmachine: () Calling .GetVersion
	I0802 18:04:33.501560   38061 main.go:141] libmachine: Using API Version  1
	I0802 18:04:33.501590   38061 main.go:141] libmachine: () Calling .SetConfigRaw
	I0802 18:04:33.502287   38061 main.go:141] libmachine: () Calling .GetMachineName
	I0802 18:04:33.502705   38061 main.go:141] libmachine: (multinode-279607) Calling .GetState
	I0802 18:04:33.505572   38061 status.go:330] multinode-279607 host status = "Running" (err=<nil>)
	I0802 18:04:33.505604   38061 host.go:66] Checking if "multinode-279607" exists ...
	I0802 18:04:33.506039   38061 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0802 18:04:33.506091   38061 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0802 18:04:33.523577   38061 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33989
	I0802 18:04:33.524185   38061 main.go:141] libmachine: () Calling .GetVersion
	I0802 18:04:33.524806   38061 main.go:141] libmachine: Using API Version  1
	I0802 18:04:33.524826   38061 main.go:141] libmachine: () Calling .SetConfigRaw
	I0802 18:04:33.525271   38061 main.go:141] libmachine: () Calling .GetMachineName
	I0802 18:04:33.525632   38061 main.go:141] libmachine: (multinode-279607) Calling .GetIP
	I0802 18:04:33.530050   38061 main.go:141] libmachine: (multinode-279607) DBG | domain multinode-279607 has defined MAC address 52:54:00:98:21:fd in network mk-multinode-279607
	I0802 18:04:33.530505   38061 main.go:141] libmachine: (multinode-279607) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:21:fd", ip: ""} in network mk-multinode-279607: {Iface:virbr1 ExpiryTime:2024-08-02 19:01:09 +0000 UTC Type:0 Mac:52:54:00:98:21:fd Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:multinode-279607 Clientid:01:52:54:00:98:21:fd}
	I0802 18:04:33.530653   38061 main.go:141] libmachine: (multinode-279607) DBG | domain multinode-279607 has defined IP address 192.168.39.224 and MAC address 52:54:00:98:21:fd in network mk-multinode-279607
	I0802 18:04:33.530683   38061 host.go:66] Checking if "multinode-279607" exists ...
	I0802 18:04:33.531115   38061 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0802 18:04:33.531178   38061 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0802 18:04:33.547384   38061 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34087
	I0802 18:04:33.548053   38061 main.go:141] libmachine: () Calling .GetVersion
	I0802 18:04:33.548843   38061 main.go:141] libmachine: Using API Version  1
	I0802 18:04:33.548876   38061 main.go:141] libmachine: () Calling .SetConfigRaw
	I0802 18:04:33.549319   38061 main.go:141] libmachine: () Calling .GetMachineName
	I0802 18:04:33.549582   38061 main.go:141] libmachine: (multinode-279607) Calling .DriverName
	I0802 18:04:33.549869   38061 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0802 18:04:33.549910   38061 main.go:141] libmachine: (multinode-279607) Calling .GetSSHHostname
	I0802 18:04:33.553214   38061 main.go:141] libmachine: (multinode-279607) DBG | domain multinode-279607 has defined MAC address 52:54:00:98:21:fd in network mk-multinode-279607
	I0802 18:04:33.554040   38061 main.go:141] libmachine: (multinode-279607) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:21:fd", ip: ""} in network mk-multinode-279607: {Iface:virbr1 ExpiryTime:2024-08-02 19:01:09 +0000 UTC Type:0 Mac:52:54:00:98:21:fd Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:multinode-279607 Clientid:01:52:54:00:98:21:fd}
	I0802 18:04:33.554074   38061 main.go:141] libmachine: (multinode-279607) DBG | domain multinode-279607 has defined IP address 192.168.39.224 and MAC address 52:54:00:98:21:fd in network mk-multinode-279607
	I0802 18:04:33.554267   38061 main.go:141] libmachine: (multinode-279607) Calling .GetSSHPort
	I0802 18:04:33.554493   38061 main.go:141] libmachine: (multinode-279607) Calling .GetSSHKeyPath
	I0802 18:04:33.554661   38061 main.go:141] libmachine: (multinode-279607) Calling .GetSSHUsername
	I0802 18:04:33.554823   38061 sshutil.go:53] new ssh client: &{IP:192.168.39.224 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-5398/.minikube/machines/multinode-279607/id_rsa Username:docker}
	I0802 18:04:33.643694   38061 ssh_runner.go:195] Run: systemctl --version
	I0802 18:04:33.653218   38061 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0802 18:04:33.673254   38061 kubeconfig.go:125] found "multinode-279607" server: "https://192.168.39.224:8443"
	I0802 18:04:33.673287   38061 api_server.go:166] Checking apiserver status ...
	I0802 18:04:33.673322   38061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0802 18:04:33.690298   38061 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1921/cgroup
	W0802 18:04:33.702022   38061 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1921/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0802 18:04:33.702081   38061 ssh_runner.go:195] Run: ls
	I0802 18:04:33.707543   38061 api_server.go:253] Checking apiserver healthz at https://192.168.39.224:8443/healthz ...
	I0802 18:04:33.712987   38061 api_server.go:279] https://192.168.39.224:8443/healthz returned 200:
	ok
	I0802 18:04:33.713022   38061 status.go:422] multinode-279607 apiserver status = Running (err=<nil>)
	I0802 18:04:33.713036   38061 status.go:257] multinode-279607 status: &{Name:multinode-279607 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0802 18:04:33.713058   38061 status.go:255] checking status of multinode-279607-m02 ...
	I0802 18:04:33.713379   38061 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0802 18:04:33.713437   38061 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0802 18:04:33.729979   38061 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34565
	I0802 18:04:33.730374   38061 main.go:141] libmachine: () Calling .GetVersion
	I0802 18:04:33.730904   38061 main.go:141] libmachine: Using API Version  1
	I0802 18:04:33.730928   38061 main.go:141] libmachine: () Calling .SetConfigRaw
	I0802 18:04:33.731257   38061 main.go:141] libmachine: () Calling .GetMachineName
	I0802 18:04:33.731446   38061 main.go:141] libmachine: (multinode-279607-m02) Calling .GetState
	I0802 18:04:33.733353   38061 status.go:330] multinode-279607-m02 host status = "Running" (err=<nil>)
	I0802 18:04:33.733371   38061 host.go:66] Checking if "multinode-279607-m02" exists ...
	I0802 18:04:33.733689   38061 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0802 18:04:33.733746   38061 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0802 18:04:33.749904   38061 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37693
	I0802 18:04:33.750453   38061 main.go:141] libmachine: () Calling .GetVersion
	I0802 18:04:33.750984   38061 main.go:141] libmachine: Using API Version  1
	I0802 18:04:33.751004   38061 main.go:141] libmachine: () Calling .SetConfigRaw
	I0802 18:04:33.751383   38061 main.go:141] libmachine: () Calling .GetMachineName
	I0802 18:04:33.751612   38061 main.go:141] libmachine: (multinode-279607-m02) Calling .GetIP
	I0802 18:04:33.755997   38061 main.go:141] libmachine: (multinode-279607-m02) DBG | domain multinode-279607-m02 has defined MAC address 52:54:00:59:4e:81 in network mk-multinode-279607
	I0802 18:04:33.756816   38061 main.go:141] libmachine: (multinode-279607-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:4e:81", ip: ""} in network mk-multinode-279607: {Iface:virbr1 ExpiryTime:2024-08-02 19:02:37 +0000 UTC Type:0 Mac:52:54:00:59:4e:81 Iaid: IPaddr:192.168.39.64 Prefix:24 Hostname:multinode-279607-m02 Clientid:01:52:54:00:59:4e:81}
	I0802 18:04:33.756852   38061 main.go:141] libmachine: (multinode-279607-m02) DBG | domain multinode-279607-m02 has defined IP address 192.168.39.64 and MAC address 52:54:00:59:4e:81 in network mk-multinode-279607
	I0802 18:04:33.757022   38061 host.go:66] Checking if "multinode-279607-m02" exists ...
	I0802 18:04:33.757334   38061 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0802 18:04:33.757371   38061 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0802 18:04:33.775113   38061 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33021
	I0802 18:04:33.775485   38061 main.go:141] libmachine: () Calling .GetVersion
	I0802 18:04:33.775951   38061 main.go:141] libmachine: Using API Version  1
	I0802 18:04:33.775971   38061 main.go:141] libmachine: () Calling .SetConfigRaw
	I0802 18:04:33.776247   38061 main.go:141] libmachine: () Calling .GetMachineName
	I0802 18:04:33.776519   38061 main.go:141] libmachine: (multinode-279607-m02) Calling .DriverName
	I0802 18:04:33.776739   38061 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0802 18:04:33.776759   38061 main.go:141] libmachine: (multinode-279607-m02) Calling .GetSSHHostname
	I0802 18:04:33.780487   38061 main.go:141] libmachine: (multinode-279607-m02) DBG | domain multinode-279607-m02 has defined MAC address 52:54:00:59:4e:81 in network mk-multinode-279607
	I0802 18:04:33.781322   38061 main.go:141] libmachine: (multinode-279607-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:4e:81", ip: ""} in network mk-multinode-279607: {Iface:virbr1 ExpiryTime:2024-08-02 19:02:37 +0000 UTC Type:0 Mac:52:54:00:59:4e:81 Iaid: IPaddr:192.168.39.64 Prefix:24 Hostname:multinode-279607-m02 Clientid:01:52:54:00:59:4e:81}
	I0802 18:04:33.781352   38061 main.go:141] libmachine: (multinode-279607-m02) DBG | domain multinode-279607-m02 has defined IP address 192.168.39.64 and MAC address 52:54:00:59:4e:81 in network mk-multinode-279607
	I0802 18:04:33.781697   38061 main.go:141] libmachine: (multinode-279607-m02) Calling .GetSSHPort
	I0802 18:04:33.782040   38061 main.go:141] libmachine: (multinode-279607-m02) Calling .GetSSHKeyPath
	I0802 18:04:33.782380   38061 main.go:141] libmachine: (multinode-279607-m02) Calling .GetSSHUsername
	I0802 18:04:33.782621   38061 sshutil.go:53] new ssh client: &{IP:192.168.39.64 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-5398/.minikube/machines/multinode-279607-m02/id_rsa Username:docker}
	I0802 18:04:33.862176   38061 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0802 18:04:33.876842   38061 status.go:257] multinode-279607-m02 status: &{Name:multinode-279607-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0802 18:04:33.876882   38061 status.go:255] checking status of multinode-279607-m03 ...
	I0802 18:04:33.877196   38061 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0802 18:04:33.877241   38061 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0802 18:04:33.892814   38061 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39299
	I0802 18:04:33.893286   38061 main.go:141] libmachine: () Calling .GetVersion
	I0802 18:04:33.893835   38061 main.go:141] libmachine: Using API Version  1
	I0802 18:04:33.893858   38061 main.go:141] libmachine: () Calling .SetConfigRaw
	I0802 18:04:33.894168   38061 main.go:141] libmachine: () Calling .GetMachineName
	I0802 18:04:33.894392   38061 main.go:141] libmachine: (multinode-279607-m03) Calling .GetState
	I0802 18:04:33.895926   38061 status.go:330] multinode-279607-m03 host status = "Stopped" (err=<nil>)
	I0802 18:04:33.895957   38061 status.go:343] host is not running, skipping remaining checks
	I0802 18:04:33.895965   38061 status.go:257] multinode-279607-m03 status: &{Name:multinode-279607-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopNode (3.47s)

TestMultiNode/serial/StartAfterStop (42.4s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-279607 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-279607 node start m03 -v=7 --alsologtostderr: (41.779309252s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-279607 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (42.40s)

TestMultiNode/serial/RestartKeepsNodes (188.81s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-279607
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-279607
multinode_test.go:321: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-279607: (27.079475699s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-279607 --wait=true -v=8 --alsologtostderr
E0802 18:06:18.808662   12563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5398/.minikube/profiles/addons-723198/client.crt: no such file or directory
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-279607 --wait=true -v=8 --alsologtostderr: (2m41.645949969s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-279607
--- PASS: TestMultiNode/serial/RestartKeepsNodes (188.81s)

TestMultiNode/serial/DeleteNode (2.3s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-279607 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-279607 node delete m03: (1.783896687s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-279607 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (2.30s)

TestMultiNode/serial/StopMultiNode (25.76s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-279607 stop
multinode_test.go:345: (dbg) Done: out/minikube-linux-amd64 -p multinode-279607 stop: (25.595927971s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-279607 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-279607 status: exit status 7 (81.396911ms)

-- stdout --
	multinode-279607
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-279607-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-279607 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-279607 status --alsologtostderr: exit status 7 (83.203938ms)

-- stdout --
	multinode-279607
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-279607-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0802 18:08:53.123064   39853 out.go:291] Setting OutFile to fd 1 ...
	I0802 18:08:53.123196   39853 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0802 18:08:53.123208   39853 out.go:304] Setting ErrFile to fd 2...
	I0802 18:08:53.123215   39853 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0802 18:08:53.123373   39853 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19355-5398/.minikube/bin
	I0802 18:08:53.123528   39853 out.go:298] Setting JSON to false
	I0802 18:08:53.123552   39853 mustload.go:65] Loading cluster: multinode-279607
	I0802 18:08:53.123592   39853 notify.go:220] Checking for updates...
	I0802 18:08:53.124034   39853 config.go:182] Loaded profile config "multinode-279607": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0802 18:08:53.124054   39853 status.go:255] checking status of multinode-279607 ...
	I0802 18:08:53.124580   39853 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0802 18:08:53.124622   39853 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0802 18:08:53.144574   39853 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43749
	I0802 18:08:53.145064   39853 main.go:141] libmachine: () Calling .GetVersion
	I0802 18:08:53.145809   39853 main.go:141] libmachine: Using API Version  1
	I0802 18:08:53.145836   39853 main.go:141] libmachine: () Calling .SetConfigRaw
	I0802 18:08:53.146268   39853 main.go:141] libmachine: () Calling .GetMachineName
	I0802 18:08:53.146529   39853 main.go:141] libmachine: (multinode-279607) Calling .GetState
	I0802 18:08:53.148314   39853 status.go:330] multinode-279607 host status = "Stopped" (err=<nil>)
	I0802 18:08:53.148330   39853 status.go:343] host is not running, skipping remaining checks
	I0802 18:08:53.148338   39853 status.go:257] multinode-279607 status: &{Name:multinode-279607 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0802 18:08:53.148382   39853 status.go:255] checking status of multinode-279607-m02 ...
	I0802 18:08:53.148676   39853 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0802 18:08:53.148714   39853 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0802 18:08:53.163357   39853 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36873
	I0802 18:08:53.163824   39853 main.go:141] libmachine: () Calling .GetVersion
	I0802 18:08:53.164362   39853 main.go:141] libmachine: Using API Version  1
	I0802 18:08:53.164395   39853 main.go:141] libmachine: () Calling .SetConfigRaw
	I0802 18:08:53.164676   39853 main.go:141] libmachine: () Calling .GetMachineName
	I0802 18:08:53.164862   39853 main.go:141] libmachine: (multinode-279607-m02) Calling .GetState
	I0802 18:08:53.166213   39853 status.go:330] multinode-279607-m02 host status = "Stopped" (err=<nil>)
	I0802 18:08:53.166226   39853 status.go:343] host is not running, skipping remaining checks
	I0802 18:08:53.166232   39853 status.go:257] multinode-279607-m02 status: &{Name:multinode-279607-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (25.76s)

TestMultiNode/serial/RestartMultiNode (123.27s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-279607 --wait=true -v=8 --alsologtostderr --driver=kvm2 
E0802 18:09:21.857502   12563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5398/.minikube/profiles/addons-723198/client.crt: no such file or directory
E0802 18:09:21.992129   12563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5398/.minikube/profiles/functional-933143/client.crt: no such file or directory
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-279607 --wait=true -v=8 --alsologtostderr --driver=kvm2 : (2m2.752720277s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-279607 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (123.27s)

TestMultiNode/serial/ValidateNameConflict (48.96s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-279607
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-279607-m02 --driver=kvm2 
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-279607-m02 --driver=kvm2 : exit status 14 (59.834037ms)

-- stdout --
	* [multinode-279607-m02] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19355
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19355-5398/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19355-5398/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	! Profile name 'multinode-279607-m02' is duplicated with machine name 'multinode-279607-m02' in profile 'multinode-279607'
	X Exiting due to MK_USAGE: Profile name should be unique

** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-279607-m03 --driver=kvm2 
E0802 18:11:18.809445   12563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5398/.minikube/profiles/addons-723198/client.crt: no such file or directory
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-279607-m03 --driver=kvm2 : (47.860918189s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-279607
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-279607: exit status 80 (198.586821ms)

-- stdout --
	* Adding node m03 to cluster multinode-279607 as [worker]
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-279607-m03 already exists in multinode-279607-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-279607-m03
--- PASS: TestMultiNode/serial/ValidateNameConflict (48.96s)

TestPreload (155.25s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-348962 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --kubernetes-version=v1.24.4
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-348962 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --kubernetes-version=v1.24.4: (1m27.202047829s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-348962 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-amd64 -p test-preload-348962 image pull gcr.io/k8s-minikube/busybox: (1.295072536s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-348962
preload_test.go:58: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-348962: (12.580592745s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-348962 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2 
preload_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-348962 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2 : (53.127081787s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-348962 image list
helpers_test.go:175: Cleaning up "test-preload-348962" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-348962
E0802 18:14:21.992205   12563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5398/.minikube/profiles/functional-933143/client.crt: no such file or directory
--- PASS: TestPreload (155.25s)

TestScheduledStopUnix (120.78s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-673165 --memory=2048 --driver=kvm2 
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-673165 --memory=2048 --driver=kvm2 : (49.206727029s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-673165 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-673165 -n scheduled-stop-673165
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-673165 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-673165 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-673165 -n scheduled-stop-673165
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-673165
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-673165 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
E0802 18:16:18.809634   12563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5398/.minikube/profiles/addons-723198/client.crt: no such file or directory
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-673165
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-673165: exit status 7 (65.008849ms)

-- stdout --
	scheduled-stop-673165
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-673165 -n scheduled-stop-673165
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-673165 -n scheduled-stop-673165: exit status 7 (61.499774ms)

-- stdout --
	Stopped

-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-673165" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-673165
--- PASS: TestScheduledStopUnix (120.78s)

TestSkaffold (127.79s)

=== RUN   TestSkaffold
skaffold_test.go:59: (dbg) Run:  /tmp/skaffold.exe354263823 version
skaffold_test.go:63: skaffold version: v2.13.1
skaffold_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p skaffold-754678 --memory=2600 --driver=kvm2 
skaffold_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p skaffold-754678 --memory=2600 --driver=kvm2 : (48.251217714s)
skaffold_test.go:86: copying out/minikube-linux-amd64 to /home/jenkins/workspace/KVM_Linux_integration/out/minikube
skaffold_test.go:105: (dbg) Run:  /tmp/skaffold.exe354263823 run --minikube-profile skaffold-754678 --kube-context skaffold-754678 --status-check=true --port-forward=false --interactive=false
E0802 18:17:25.038988   12563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5398/.minikube/profiles/functional-933143/client.crt: no such file or directory
skaffold_test.go:105: (dbg) Done: /tmp/skaffold.exe354263823 run --minikube-profile skaffold-754678 --kube-context skaffold-754678 --status-check=true --port-forward=false --interactive=false: (1m6.711000599s)
skaffold_test.go:111: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-app" in namespace "default" ...
helpers_test.go:344: "leeroy-app-654c4745d6-vbb7j" [641a2c72-59ed-46d3-8054-e945c657401e] Running
skaffold_test.go:111: (dbg) TestSkaffold: app=leeroy-app healthy within 6.00327811s
skaffold_test.go:114: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-web" in namespace "default" ...
helpers_test.go:344: "leeroy-web-597786d499-p552k" [4212da67-25e5-4f7d-ba91-d0ed1f20638b] Running
skaffold_test.go:114: (dbg) TestSkaffold: app=leeroy-web healthy within 5.004795736s
helpers_test.go:175: Cleaning up "skaffold-754678" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p skaffold-754678
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p skaffold-754678: (1.149700446s)
--- PASS: TestSkaffold (127.79s)

TestRunningBinaryUpgrade (172.93s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.684612929 start -p running-upgrade-730354 --memory=2200 --vm-driver=kvm2 
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.684612929 start -p running-upgrade-730354 --memory=2200 --vm-driver=kvm2 : (1m49.250207869s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-730354 --memory=2200 --alsologtostderr -v=1 --driver=kvm2 
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-730354 --memory=2200 --alsologtostderr -v=1 --driver=kvm2 : (1m2.052244211s)
helpers_test.go:175: Cleaning up "running-upgrade-730354" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-730354
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-730354: (1.111395537s)
--- PASS: TestRunningBinaryUpgrade (172.93s)

TestKubernetesUpgrade (210.1s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-743210 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2 
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-743210 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2 : (57.076722149s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-743210
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-743210: (12.478186178s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-743210 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-743210 status --format={{.Host}}: exit status 7 (71.281963ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-743210 --memory=2200 --kubernetes-version=v1.31.0-rc.0 --alsologtostderr -v=1 --driver=kvm2 
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-743210 --memory=2200 --kubernetes-version=v1.31.0-rc.0 --alsologtostderr -v=1 --driver=kvm2 : (1m24.883221303s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-743210 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-743210 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2 
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-743210 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2 : exit status 106 (100.732567ms)

-- stdout --
	* [kubernetes-upgrade-743210] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19355
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19355-5398/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19355-5398/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=

-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.31.0-rc.0 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-743210
	    minikube start -p kubernetes-upgrade-743210 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-7432102 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.31.0-rc.0, by running:
	    
	    minikube start -p kubernetes-upgrade-743210 --kubernetes-version=v1.31.0-rc.0

** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-743210 --memory=2200 --kubernetes-version=v1.31.0-rc.0 --alsologtostderr -v=1 --driver=kvm2 
E0802 18:21:18.808793   12563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5398/.minikube/profiles/addons-723198/client.crt: no such file or directory
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-743210 --memory=2200 --kubernetes-version=v1.31.0-rc.0 --alsologtostderr -v=1 --driver=kvm2 : (54.127511085s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-743210" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-743210
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-743210: (1.284171946s)
--- PASS: TestKubernetesUpgrade (210.10s)
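The `K8S_DOWNGRADE_UNSUPPORTED` failure above (exit status 106) comes from comparing the cluster's existing Kubernetes version against the requested one. A hypothetical Python sketch of that kind of semantic-version check follows; this is illustrative only, not minikube's actual implementation (minikube is written in Go and consults cluster state as well):

```python
import re

def parse_version(v: str):
    """Split 'v1.31.0-rc.0' into a comparable (major, minor, patch) tuple.
    Pre-release tags such as '-rc.0' are ignored for this coarse check."""
    m = re.match(r"v?(\d+)\.(\d+)\.(\d+)", v)
    if not m:
        raise ValueError(f"unparseable version: {v}")
    return tuple(int(x) for x in m.groups())

def is_downgrade(current: str, requested: str) -> bool:
    """True when the requested version is older than the running one."""
    return parse_version(requested) < parse_version(current)

print(is_downgrade("v1.31.0-rc.0", "v1.20.0"))  # True  -> rejected, as in the log
print(is_downgrade("v1.20.0", "v1.31.0-rc.0"))  # False -> upgrade path is allowed
```

Tuple comparison gives the expected ordering here ((1, 20, 0) < (1, 31, 0)), which is why v1.20.0 -> v1.31.0-rc.0 succeeds earlier in the test while the reverse is refused.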

                                                
                                    
TestStoppedBinaryUpgrade/Setup (0.47s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.47s)

TestStoppedBinaryUpgrade/Upgrade (198.52s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.854679433 start -p stopped-upgrade-770309 --memory=2200 --vm-driver=kvm2 
E0802 18:19:21.992809   12563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5398/.minikube/profiles/functional-933143/client.crt: no such file or directory
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.854679433 start -p stopped-upgrade-770309 --memory=2200 --vm-driver=kvm2 : (2m5.045225128s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.854679433 -p stopped-upgrade-770309 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.854679433 -p stopped-upgrade-770309 stop: (14.168966465s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-770309 --memory=2200 --alsologtostderr -v=1 --driver=kvm2 
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-770309 --memory=2200 --alsologtostderr -v=1 --driver=kvm2 : (59.304152942s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (198.52s)

TestStoppedBinaryUpgrade/MinikubeLogs (1.04s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-770309
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-amd64 logs -p stopped-upgrade-770309: (1.038319283s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.04s)

TestPause/serial/Start (89.33s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-958320 --memory=2048 --install-addons=false --wait=all --driver=kvm2 
E0802 18:23:21.168430   12563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5398/.minikube/profiles/skaffold-754678/client.crt: no such file or directory
E0802 18:23:23.729286   12563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5398/.minikube/profiles/skaffold-754678/client.crt: no such file or directory
E0802 18:23:28.850287   12563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5398/.minikube/profiles/skaffold-754678/client.crt: no such file or directory
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-958320 --memory=2048 --install-addons=false --wait=all --driver=kvm2 : (1m29.326132422s)
--- PASS: TestPause/serial/Start (89.33s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-626910 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2 
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-626910 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2 : exit status 14 (80.092623ms)

-- stdout --
	* [NoKubernetes-626910] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19355
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19355-5398/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19355-5398/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)

TestNoKubernetes/serial/StartWithK8s (97.06s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-626910 --driver=kvm2 
E0802 18:23:59.572056   12563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5398/.minikube/profiles/skaffold-754678/client.crt: no such file or directory
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-626910 --driver=kvm2 : (1m36.772970955s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-626910 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (97.06s)

TestNetworkPlugins/group/auto/Start (73.26s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-111909 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2 
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-111909 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2 : (1m13.256723627s)
--- PASS: TestNetworkPlugins/group/auto/Start (73.26s)

TestPause/serial/SecondStartNoReconfiguration (72.8s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-958320 --alsologtostderr -v=1 --driver=kvm2 
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-958320 --alsologtostderr -v=1 --driver=kvm2 : (1m12.774070203s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (72.80s)

TestNoKubernetes/serial/StartWithStopK8s (9.91s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-626910 --no-kubernetes --driver=kvm2 
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-626910 --no-kubernetes --driver=kvm2 : (8.569616201s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-626910 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-626910 status -o json: exit status 2 (233.757072ms)

-- stdout --
	{"Name":"NoKubernetes-626910","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-626910
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-626910: (1.106619486s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (9.91s)
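The `status -o json` payload captured in the test above is machine-readable. A minimal Python sketch of consuming it; the JSON literal is copied verbatim from this log, and the snippet is illustrative, not part of the test suite:

```python
import json

# Output of `minikube -p NoKubernetes-626910 status -o json`, copied from the
# log above. With Kubernetes disabled, the host stays "Running" while kubelet
# and the apiserver report "Stopped", so the status command exits non-zero.
raw = ('{"Name":"NoKubernetes-626910","Host":"Running","Kubelet":"Stopped",'
       '"APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}')

status = json.loads(raw)
host_up = status["Host"] == "Running"
k8s_up = status["Kubelet"] == "Running" and status["APIServer"] == "Running"
print(host_up, k8s_up)  # True False
```

This is why the test treats exit status 2 as expected rather than a failure: the host is healthy but the Kubernetes components are deliberately stopped.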

                                                
                                    
TestNoKubernetes/serial/Start (35.06s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-626910 --no-kubernetes --driver=kvm2 
E0802 18:26:01.857686   12563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5398/.minikube/profiles/addons-723198/client.crt: no such file or directory
E0802 18:26:02.452464   12563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5398/.minikube/profiles/skaffold-754678/client.crt: no such file or directory
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-626910 --no-kubernetes --driver=kvm2 : (35.059874506s)
--- PASS: TestNoKubernetes/serial/Start (35.06s)

TestPause/serial/Pause (0.64s)

=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-958320 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.64s)

TestNetworkPlugins/group/auto/KubeletFlags (0.21s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-111909 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.21s)

TestNetworkPlugins/group/auto/NetCatPod (10.25s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-111909 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-vmldg" [e2daab68-f3d2-4d13-96b3-98ab8a5369b3] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-vmldg" [e2daab68-f3d2-4d13-96b3-98ab8a5369b3] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 10.004223697s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (10.25s)

TestPause/serial/VerifyStatus (0.25s)

=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p pause-958320 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-958320 --output=json --layout=cluster: exit status 2 (248.134424ms)

-- stdout --
	{"Name":"pause-958320","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 12 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.33.1","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-958320","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.25s)
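The `--layout=cluster` payload shown in this test encodes component state with HTTP-like status codes (200 OK, 405 Stopped, 418 Paused, as seen in the log itself). A small Python sketch of reading it, using the JSON copied verbatim from this log; illustrative only, not part of the test suite:

```python
import json

# Cluster-layout status captured above. The per-component codes mirror HTTP:
# 200 = OK, 405 = Stopped, 418 = Paused.
raw = ('{"Name":"pause-958320","StatusCode":418,"StatusName":"Paused",'
       '"Step":"Done","StepDetail":"* Paused 12 containers in: kube-system, '
       'kubernetes-dashboard, storage-gluster, istio-operator",'
       '"BinaryVersion":"v1.33.1","Components":{"kubeconfig":{"Name":"kubeconfig",'
       '"StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-958320",'
       '"StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver",'
       '"StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet",'
       '"StatusCode":405,"StatusName":"Stopped"}}}]}')

cluster = json.loads(raw)
for node in cluster["Nodes"]:
    for name, comp in node["Components"].items():
        print(name, comp["StatusName"])
# apiserver Paused
# kubelet Stopped
```

The top-level StatusCode of 418 ("Paused") is what makes the CLI exit with status 2 here, which the test accepts as the expected paused state.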

                                                
                                    
TestPause/serial/Unpause (0.61s)

=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-amd64 unpause -p pause-958320 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.61s)

TestPause/serial/PauseAgain (0.95s)

=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-958320 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.95s)

TestPause/serial/DeletePaused (1.12s)

=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p pause-958320 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p pause-958320 --alsologtostderr -v=5: (1.121552205s)
--- PASS: TestPause/serial/DeletePaused (1.12s)

TestPause/serial/VerifyDeletedResources (0.53s)

=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestPause/serial/VerifyDeletedResources (0.53s)

TestNetworkPlugins/group/kindnet/Start (87.62s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-111909 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2 
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-111909 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2 : (1m27.615508111s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (87.62s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.21s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-626910 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-626910 "sudo systemctl is-active --quiet service kubelet": exit status 1 (208.170999ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.21s)

TestNoKubernetes/serial/ProfileList (1.13s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.13s)

TestNoKubernetes/serial/Stop (2.29s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-626910
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-626910: (2.288835262s)
--- PASS: TestNoKubernetes/serial/Stop (2.29s)

TestNoKubernetes/serial/StartNoArgs (50.68s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-626910 --driver=kvm2 
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-626910 --driver=kvm2 : (50.676507961s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (50.68s)

TestNetworkPlugins/group/auto/DNS (0.16s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-111909 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.16s)

TestNetworkPlugins/group/auto/Localhost (0.13s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-111909 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.13s)

TestNetworkPlugins/group/auto/HairPin (0.13s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-111909 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.13s)

TestNetworkPlugins/group/calico/Start (127.15s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-111909 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2 
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-111909 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2 : (2m7.153037267s)
--- PASS: TestNetworkPlugins/group/calico/Start (127.15s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.19s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-626910 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-626910 "sudo systemctl is-active --quiet service kubelet": exit status 1 (191.331061ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.19s)

TestNetworkPlugins/group/custom-flannel/Start (106.85s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-111909 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2 
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-111909 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2 : (1m46.851168599s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (106.85s)

TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-nqzmm" [c2439493-da77-4b3b-86ed-e328382fd820] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.004823477s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

TestNetworkPlugins/group/kindnet/KubeletFlags (0.2s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-111909 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.20s)

TestNetworkPlugins/group/kindnet/NetCatPod (11.23s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-111909 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-tp9k2" [00b41248-87a5-4569-97f5-b804a54a1913] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0802 18:27:44.461307   12563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5398/.minikube/profiles/gvisor-349582/client.crt: no such file or directory
E0802 18:27:44.466652   12563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5398/.minikube/profiles/gvisor-349582/client.crt: no such file or directory
E0802 18:27:44.476964   12563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5398/.minikube/profiles/gvisor-349582/client.crt: no such file or directory
E0802 18:27:44.497298   12563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5398/.minikube/profiles/gvisor-349582/client.crt: no such file or directory
E0802 18:27:44.538124   12563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5398/.minikube/profiles/gvisor-349582/client.crt: no such file or directory
E0802 18:27:44.618494   12563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5398/.minikube/profiles/gvisor-349582/client.crt: no such file or directory
E0802 18:27:44.778827   12563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5398/.minikube/profiles/gvisor-349582/client.crt: no such file or directory
E0802 18:27:45.099521   12563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5398/.minikube/profiles/gvisor-349582/client.crt: no such file or directory
E0802 18:27:45.740207   12563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5398/.minikube/profiles/gvisor-349582/client.crt: no such file or directory
helpers_test.go:344: "netcat-6bc787d567-tp9k2" [00b41248-87a5-4569-97f5-b804a54a1913] Running
E0802 18:27:47.021109   12563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5398/.minikube/profiles/gvisor-349582/client.crt: no such file or directory
E0802 18:27:49.581964   12563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5398/.minikube/profiles/gvisor-349582/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 11.003657626s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (11.23s)

TestNetworkPlugins/group/kindnet/DNS (0.16s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-111909 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.16s)

TestNetworkPlugins/group/kindnet/Localhost (0.14s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-111909 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.14s)

TestNetworkPlugins/group/false/Start (75.52s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-111909 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.13s)

TestNetworkPlugins/group/false/Start (75.52s)

=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p false-111909 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=kvm2 
E0802 18:28:18.607093   12563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5398/.minikube/profiles/skaffold-754678/client.crt: no such file or directory
E0802 18:28:25.423528   12563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5398/.minikube/profiles/gvisor-349582/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p false-111909 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=kvm2 : (1m15.517026832s)
--- PASS: TestNetworkPlugins/group/false/Start (75.52s)

TestNetworkPlugins/group/enable-default-cni/Start (117.64s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-111909 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2 
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-111909 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2 : (1m57.637434925s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (117.64s)

TestNetworkPlugins/group/calico/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-tgmml" [42cb9ec3-040a-49c0-8825-16568a207869] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.006450877s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

TestNetworkPlugins/group/calico/KubeletFlags (0.28s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-111909 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.28s)

TestNetworkPlugins/group/calico/NetCatPod (14.33s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-111909 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-95ct9" [5b343bb1-3524-48c5-9860-726f70f4decd] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0802 18:28:46.293559   12563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5398/.minikube/profiles/skaffold-754678/client.crt: no such file or directory
helpers_test.go:344: "netcat-6bc787d567-95ct9" [5b343bb1-3524-48c5-9860-726f70f4decd] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 14.004672149s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (14.33s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.24s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-111909 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.24s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (11.26s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-111909 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-r2d8g" [fb351b08-d52b-4bf5-944b-5e4e70fe5285] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-r2d8g" [fb351b08-d52b-4bf5-944b-5e4e70fe5285] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 11.004651872s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (11.26s)

TestNetworkPlugins/group/calico/DNS (0.18s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-111909 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.18s)

TestNetworkPlugins/group/calico/Localhost (0.15s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-111909 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.15s)

TestNetworkPlugins/group/calico/HairPin (0.16s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-111909 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.16s)

TestNetworkPlugins/group/custom-flannel/DNS (0.2s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-111909 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.20s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.17s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-111909 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.17s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.17s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-111909 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.17s)

TestNetworkPlugins/group/flannel/Start (82.89s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-111909 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2 
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-111909 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2 : (1m22.893323618s)
--- PASS: TestNetworkPlugins/group/flannel/Start (82.89s)

TestNetworkPlugins/group/bridge/Start (97.71s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-111909 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2 
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-111909 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2 : (1m37.707382542s)
--- PASS: TestNetworkPlugins/group/bridge/Start (97.71s)

TestNetworkPlugins/group/false/KubeletFlags (0.23s)

=== RUN   TestNetworkPlugins/group/false/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p false-111909 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/false/KubeletFlags (0.23s)

TestNetworkPlugins/group/false/NetCatPod (11.3s)

=== RUN   TestNetworkPlugins/group/false/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context false-111909 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/false/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-zl9v7" [7787543d-0720-4598-b476-7fd4c2cc68b4] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-zl9v7" [7787543d-0720-4598-b476-7fd4c2cc68b4] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/false/NetCatPod: app=netcat healthy within 11.003883021s
--- PASS: TestNetworkPlugins/group/false/NetCatPod (11.30s)

TestNetworkPlugins/group/false/DNS (0.15s)

=== RUN   TestNetworkPlugins/group/false/DNS
net_test.go:175: (dbg) Run:  kubectl --context false-111909 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/false/DNS (0.15s)

TestNetworkPlugins/group/false/Localhost (0.13s)

=== RUN   TestNetworkPlugins/group/false/Localhost
net_test.go:194: (dbg) Run:  kubectl --context false-111909 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/false/Localhost (0.13s)

TestNetworkPlugins/group/false/HairPin (0.15s)

=== RUN   TestNetworkPlugins/group/false/HairPin
net_test.go:264: (dbg) Run:  kubectl --context false-111909 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/false/HairPin (0.15s)

TestNetworkPlugins/group/kubenet/Start (88.35s)

=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kubenet-111909 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=kvm2 
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kubenet-111909 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=kvm2 : (1m28.35072102s)
--- PASS: TestNetworkPlugins/group/kubenet/Start (88.35s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.27s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-111909 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.27s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (12.29s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-111909 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-75zqb" [3000ae35-f3c7-487c-af9d-609779b9dc2c] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0802 18:30:28.305275   12563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5398/.minikube/profiles/gvisor-349582/client.crt: no such file or directory
helpers_test.go:344: "netcat-6bc787d567-75zqb" [3000ae35-f3c7-487c-af9d-609779b9dc2c] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 12.003820402s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (12.29s)

TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-z9xqz" [7fe0f71c-49f4-42a6-8d5e-729b7bed3f17] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.00416291s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

TestNetworkPlugins/group/enable-default-cni/DNS (0.18s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-111909 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.18s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.16s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-111909 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.16s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.15s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-111909 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.15s)

TestNetworkPlugins/group/flannel/KubeletFlags (0.22s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-111909 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.22s)

TestNetworkPlugins/group/flannel/NetCatPod (13.23s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-111909 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-tt4g9" [80c5703a-4068-4f3e-a6c5-027fed1f95e5] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-tt4g9" [80c5703a-4068-4f3e-a6c5-027fed1f95e5] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 13.006517655s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (13.23s)

TestStartStop/group/old-k8s-version/serial/FirstStart (153.65s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-681678 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --kubernetes-version=v1.20.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-681678 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --kubernetes-version=v1.20.0: (2m33.649286649s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (153.65s)

TestNetworkPlugins/group/flannel/DNS (0.22s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-111909 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.22s)

TestNetworkPlugins/group/flannel/Localhost (0.17s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-111909 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.17s)

TestNetworkPlugins/group/flannel/HairPin (0.17s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-111909 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.17s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.22s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-111909 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.22s)

TestNetworkPlugins/group/bridge/NetCatPod (13.25s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-111909 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-vbqxq" [d067cfa0-eb91-45af-96c6-f5f2c3dbf45b] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0802 18:31:03.144350   12563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5398/.minikube/profiles/auto-111909/client.crt: no such file or directory
E0802 18:31:03.149767   12563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5398/.minikube/profiles/auto-111909/client.crt: no such file or directory
E0802 18:31:03.160010   12563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5398/.minikube/profiles/auto-111909/client.crt: no such file or directory
E0802 18:31:03.180477   12563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5398/.minikube/profiles/auto-111909/client.crt: no such file or directory
E0802 18:31:03.220863   12563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5398/.minikube/profiles/auto-111909/client.crt: no such file or directory
E0802 18:31:03.301499   12563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5398/.minikube/profiles/auto-111909/client.crt: no such file or directory
E0802 18:31:03.461980   12563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5398/.minikube/profiles/auto-111909/client.crt: no such file or directory
E0802 18:31:03.783077   12563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5398/.minikube/profiles/auto-111909/client.crt: no such file or directory
E0802 18:31:04.423696   12563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5398/.minikube/profiles/auto-111909/client.crt: no such file or directory
E0802 18:31:05.704126   12563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5398/.minikube/profiles/auto-111909/client.crt: no such file or directory
helpers_test.go:344: "netcat-6bc787d567-vbqxq" [d067cfa0-eb91-45af-96c6-f5f2c3dbf45b] Running
E0802 18:31:08.264847   12563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5398/.minikube/profiles/auto-111909/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 13.003431006s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (13.25s)

TestStartStop/group/no-preload/serial/FirstStart (96.36s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-557189 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --kubernetes-version=v1.31.0-rc.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-557189 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --kubernetes-version=v1.31.0-rc.0: (1m36.362181308s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (96.36s)

TestNetworkPlugins/group/bridge/DNS (0.19s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-111909 exec deployment/netcat -- nslookup kubernetes.default
E0802 18:31:13.385293   12563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5398/.minikube/profiles/auto-111909/client.crt: no such file or directory
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.19s)

TestNetworkPlugins/group/bridge/Localhost (0.14s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-111909 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.14s)

TestNetworkPlugins/group/bridge/HairPin (0.13s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-111909 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.13s)

TestNetworkPlugins/group/kubenet/KubeletFlags (0.25s)

=== RUN   TestNetworkPlugins/group/kubenet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kubenet-111909 "pgrep -a kubelet"
E0802 18:31:23.625471   12563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5398/.minikube/profiles/auto-111909/client.crt: no such file or directory
--- PASS: TestNetworkPlugins/group/kubenet/KubeletFlags (0.25s)

TestNetworkPlugins/group/kubenet/NetCatPod (11.29s)

=== RUN   TestNetworkPlugins/group/kubenet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kubenet-111909 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-fl5qg" [55a5faff-19f7-4a39-860e-4d0dcaafc527] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-fl5qg" [55a5faff-19f7-4a39-860e-4d0dcaafc527] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: app=netcat healthy within 11.003960739s
--- PASS: TestNetworkPlugins/group/kubenet/NetCatPod (11.29s)

TestStartStop/group/embed-certs/serial/FirstStart (127.2s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-425999 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --kubernetes-version=v1.30.3
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-425999 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --kubernetes-version=v1.30.3: (2m7.196863441s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (127.20s)

TestNetworkPlugins/group/kubenet/DNS (0.18s)

=== RUN   TestNetworkPlugins/group/kubenet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kubenet-111909 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kubenet/DNS (0.18s)

TestNetworkPlugins/group/kubenet/Localhost (0.18s)

=== RUN   TestNetworkPlugins/group/kubenet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kubenet-111909 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kubenet/Localhost (0.18s)

TestNetworkPlugins/group/kubenet/HairPin (0.13s)

=== RUN   TestNetworkPlugins/group/kubenet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kubenet-111909 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kubenet/HairPin (0.13s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (135.41s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-345743 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --kubernetes-version=v1.30.3
E0802 18:32:25.066913   12563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5398/.minikube/profiles/auto-111909/client.crt: no such file or directory
E0802 18:32:34.424862   12563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5398/.minikube/profiles/kindnet-111909/client.crt: no such file or directory
E0802 18:32:34.430140   12563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5398/.minikube/profiles/kindnet-111909/client.crt: no such file or directory
E0802 18:32:34.441275   12563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5398/.minikube/profiles/kindnet-111909/client.crt: no such file or directory
E0802 18:32:34.461453   12563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5398/.minikube/profiles/kindnet-111909/client.crt: no such file or directory
E0802 18:32:34.502364   12563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5398/.minikube/profiles/kindnet-111909/client.crt: no such file or directory
E0802 18:32:34.582868   12563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5398/.minikube/profiles/kindnet-111909/client.crt: no such file or directory
E0802 18:32:34.743763   12563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5398/.minikube/profiles/kindnet-111909/client.crt: no such file or directory
E0802 18:32:35.064921   12563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5398/.minikube/profiles/kindnet-111909/client.crt: no such file or directory
E0802 18:32:35.705145   12563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5398/.minikube/profiles/kindnet-111909/client.crt: no such file or directory
E0802 18:32:36.985729   12563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5398/.minikube/profiles/kindnet-111909/client.crt: no such file or directory
E0802 18:32:39.546440   12563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5398/.minikube/profiles/kindnet-111909/client.crt: no such file or directory
E0802 18:32:44.461257   12563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5398/.minikube/profiles/gvisor-349582/client.crt: no such file or directory
E0802 18:32:44.667146   12563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5398/.minikube/profiles/kindnet-111909/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-345743 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --kubernetes-version=v1.30.3: (2m15.407069626s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (135.41s)

TestStartStop/group/no-preload/serial/DeployApp (10.35s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-557189 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [916c4533-f1bf-472c-b5d7-484af16a7273] Pending
helpers_test.go:344: "busybox" [916c4533-f1bf-472c-b5d7-484af16a7273] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [916c4533-f1bf-472c-b5d7-484af16a7273] Running
E0802 18:32:54.907648   12563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5398/.minikube/profiles/kindnet-111909/client.crt: no such file or directory
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 10.003831957s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-557189 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (10.35s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.13s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-557189 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p no-preload-557189 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.043149565s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-557189 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.13s)

TestStartStop/group/no-preload/serial/Stop (13.34s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-557189 --alsologtostderr -v=3
E0802 18:33:12.146008   12563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5398/.minikube/profiles/gvisor-349582/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-557189 --alsologtostderr -v=3: (13.344739161s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (13.34s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.18s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-557189 -n no-preload-557189
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-557189 -n no-preload-557189: exit status 7 (64.200365ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-557189 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.18s)

TestStartStop/group/no-preload/serial/SecondStart (300.18s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-557189 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --kubernetes-version=v1.31.0-rc.0
E0802 18:33:15.388125   12563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5398/.minikube/profiles/kindnet-111909/client.crt: no such file or directory
E0802 18:33:18.606935   12563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5398/.minikube/profiles/skaffold-754678/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-557189 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --kubernetes-version=v1.31.0-rc.0: (4m59.92919518s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-557189 -n no-preload-557189
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (300.18s)

TestStartStop/group/old-k8s-version/serial/DeployApp (8.53s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-681678 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [85ac7d0e-136c-4018-8eaf-1a75b2bd841c] Pending
helpers_test.go:344: "busybox" [85ac7d0e-136c-4018-8eaf-1a75b2bd841c] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [85ac7d0e-136c-4018-8eaf-1a75b2bd841c] Running
E0802 18:33:35.493385   12563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5398/.minikube/profiles/calico-111909/client.crt: no such file or directory
E0802 18:33:35.498755   12563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5398/.minikube/profiles/calico-111909/client.crt: no such file or directory
E0802 18:33:35.509112   12563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5398/.minikube/profiles/calico-111909/client.crt: no such file or directory
E0802 18:33:35.529525   12563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5398/.minikube/profiles/calico-111909/client.crt: no such file or directory
E0802 18:33:35.569887   12563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5398/.minikube/profiles/calico-111909/client.crt: no such file or directory
E0802 18:33:35.650271   12563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5398/.minikube/profiles/calico-111909/client.crt: no such file or directory
E0802 18:33:35.811128   12563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5398/.minikube/profiles/calico-111909/client.crt: no such file or directory
E0802 18:33:36.131928   12563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5398/.minikube/profiles/calico-111909/client.crt: no such file or directory
E0802 18:33:36.772325   12563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5398/.minikube/profiles/calico-111909/client.crt: no such file or directory
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 8.005244965s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-681678 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (8.53s)

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.06s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-681678 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-681678 describe deploy/metrics-server -n kube-system
E0802 18:33:38.053246   12563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5398/.minikube/profiles/calico-111909/client.crt: no such file or directory
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.06s)

TestStartStop/group/embed-certs/serial/DeployApp (9.33s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-425999 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [cf070064-8dcb-440b-a1b0-ed44192db472] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [cf070064-8dcb-440b-a1b0-ed44192db472] Running
E0802 18:33:45.735297   12563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5398/.minikube/profiles/calico-111909/client.crt: no such file or directory
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 9.003593203s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-425999 exec busybox -- /bin/sh -c "ulimit -n"
E0802 18:33:46.987411   12563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5398/.minikube/profiles/auto-111909/client.crt: no such file or directory
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (9.33s)

TestStartStop/group/old-k8s-version/serial/Stop (12.65s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-681678 --alsologtostderr -v=3
E0802 18:33:40.614418   12563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5398/.minikube/profiles/calico-111909/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-681678 --alsologtostderr -v=3: (12.645113142s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (12.65s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.97s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-425999 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-425999 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.97s)

TestStartStop/group/embed-certs/serial/Stop (13.33s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-425999 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-425999 --alsologtostderr -v=3: (13.334090941s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (13.33s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.21s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-681678 -n old-k8s-version-681678
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-681678 -n old-k8s-version-681678: exit status 7 (75.743873ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-681678 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.21s)

TestStartStop/group/old-k8s-version/serial/SecondStart (400.81s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-681678 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --kubernetes-version=v1.20.0
E0802 18:33:50.927688   12563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5398/.minikube/profiles/custom-flannel-111909/client.crt: no such file or directory
E0802 18:33:50.933014   12563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5398/.minikube/profiles/custom-flannel-111909/client.crt: no such file or directory
E0802 18:33:50.943321   12563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5398/.minikube/profiles/custom-flannel-111909/client.crt: no such file or directory
E0802 18:33:50.963554   12563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5398/.minikube/profiles/custom-flannel-111909/client.crt: no such file or directory
E0802 18:33:51.004574   12563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5398/.minikube/profiles/custom-flannel-111909/client.crt: no such file or directory
E0802 18:33:51.085544   12563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5398/.minikube/profiles/custom-flannel-111909/client.crt: no such file or directory
E0802 18:33:51.245799   12563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5398/.minikube/profiles/custom-flannel-111909/client.crt: no such file or directory
E0802 18:33:51.566706   12563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5398/.minikube/profiles/custom-flannel-111909/client.crt: no such file or directory
E0802 18:33:52.206946   12563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5398/.minikube/profiles/custom-flannel-111909/client.crt: no such file or directory
E0802 18:33:53.487750   12563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5398/.minikube/profiles/custom-flannel-111909/client.crt: no such file or directory
E0802 18:33:55.976007   12563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5398/.minikube/profiles/calico-111909/client.crt: no such file or directory
E0802 18:33:56.048268   12563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5398/.minikube/profiles/custom-flannel-111909/client.crt: no such file or directory
E0802 18:33:56.348604   12563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5398/.minikube/profiles/kindnet-111909/client.crt: no such file or directory
E0802 18:34:01.169185   12563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5398/.minikube/profiles/custom-flannel-111909/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-681678 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --kubernetes-version=v1.20.0: (6m40.563334695s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-681678 -n old-k8s-version-681678
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (400.81s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.21s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-425999 -n embed-certs-425999
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-425999 -n embed-certs-425999: exit status 7 (76.628846ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-425999 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.21s)

TestStartStop/group/embed-certs/serial/SecondStart (328.60s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-425999 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --kubernetes-version=v1.30.3
E0802 18:34:05.039553   12563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5398/.minikube/profiles/functional-933143/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-425999 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --kubernetes-version=v1.30.3: (5m28.334152095s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-425999 -n embed-certs-425999
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (328.60s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.32s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-345743 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [10fe31c7-62dc-4890-b4c6-be85fda43696] Pending
helpers_test.go:344: "busybox" [10fe31c7-62dc-4890-b4c6-be85fda43696] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [10fe31c7-62dc-4890-b4c6-be85fda43696] Running
E0802 18:34:11.410225   12563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5398/.minikube/profiles/custom-flannel-111909/client.crt: no such file or directory
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 8.00421255s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-345743 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.32s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.28s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-345743 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-345743 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.173134403s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-345743 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.28s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (13.38s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-345743 --alsologtostderr -v=3
E0802 18:34:16.457158   12563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5398/.minikube/profiles/calico-111909/client.crt: no such file or directory
E0802 18:34:21.992615   12563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5398/.minikube/profiles/functional-933143/client.crt: no such file or directory
E0802 18:34:25.918331   12563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5398/.minikube/profiles/false-111909/client.crt: no such file or directory
E0802 18:34:25.923628   12563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5398/.minikube/profiles/false-111909/client.crt: no such file or directory
E0802 18:34:25.933902   12563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5398/.minikube/profiles/false-111909/client.crt: no such file or directory
E0802 18:34:25.954204   12563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5398/.minikube/profiles/false-111909/client.crt: no such file or directory
E0802 18:34:25.995077   12563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5398/.minikube/profiles/false-111909/client.crt: no such file or directory
E0802 18:34:26.075395   12563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5398/.minikube/profiles/false-111909/client.crt: no such file or directory
E0802 18:34:26.236313   12563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5398/.minikube/profiles/false-111909/client.crt: no such file or directory
E0802 18:34:26.557107   12563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5398/.minikube/profiles/false-111909/client.crt: no such file or directory
E0802 18:34:27.198104   12563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5398/.minikube/profiles/false-111909/client.crt: no such file or directory
E0802 18:34:28.478584   12563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5398/.minikube/profiles/false-111909/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-345743 --alsologtostderr -v=3: (13.37796443s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (13.38s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.19s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-345743 -n default-k8s-diff-port-345743
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-345743 -n default-k8s-diff-port-345743: exit status 7 (64.067554ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-345743 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.19s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (305.55s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-345743 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --kubernetes-version=v1.30.3
E0802 18:34:31.039278   12563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5398/.minikube/profiles/false-111909/client.crt: no such file or directory
E0802 18:34:31.891318   12563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5398/.minikube/profiles/custom-flannel-111909/client.crt: no such file or directory
E0802 18:34:36.160276   12563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5398/.minikube/profiles/false-111909/client.crt: no such file or directory
E0802 18:34:46.400888   12563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5398/.minikube/profiles/false-111909/client.crt: no such file or directory
E0802 18:34:57.417984   12563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5398/.minikube/profiles/calico-111909/client.crt: no such file or directory
E0802 18:35:06.881786   12563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5398/.minikube/profiles/false-111909/client.crt: no such file or directory
E0802 18:35:12.851543   12563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5398/.minikube/profiles/custom-flannel-111909/client.crt: no such file or directory
E0802 18:35:18.269517   12563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5398/.minikube/profiles/kindnet-111909/client.crt: no such file or directory
E0802 18:35:25.741871   12563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5398/.minikube/profiles/enable-default-cni-111909/client.crt: no such file or directory
E0802 18:35:25.747182   12563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5398/.minikube/profiles/enable-default-cni-111909/client.crt: no such file or directory
E0802 18:35:25.757446   12563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5398/.minikube/profiles/enable-default-cni-111909/client.crt: no such file or directory
E0802 18:35:25.777762   12563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5398/.minikube/profiles/enable-default-cni-111909/client.crt: no such file or directory
E0802 18:35:25.818105   12563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5398/.minikube/profiles/enable-default-cni-111909/client.crt: no such file or directory
E0802 18:35:25.898409   12563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5398/.minikube/profiles/enable-default-cni-111909/client.crt: no such file or directory
E0802 18:35:26.058640   12563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5398/.minikube/profiles/enable-default-cni-111909/client.crt: no such file or directory
E0802 18:35:26.379255   12563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5398/.minikube/profiles/enable-default-cni-111909/client.crt: no such file or directory
E0802 18:35:27.019751   12563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5398/.minikube/profiles/enable-default-cni-111909/client.crt: no such file or directory
E0802 18:35:28.300602   12563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5398/.minikube/profiles/enable-default-cni-111909/client.crt: no such file or directory
E0802 18:35:30.861652   12563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5398/.minikube/profiles/enable-default-cni-111909/client.crt: no such file or directory
E0802 18:35:35.982879   12563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5398/.minikube/profiles/enable-default-cni-111909/client.crt: no such file or directory
E0802 18:35:36.542548   12563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5398/.minikube/profiles/flannel-111909/client.crt: no such file or directory
E0802 18:35:36.547865   12563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5398/.minikube/profiles/flannel-111909/client.crt: no such file or directory
E0802 18:35:36.558176   12563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5398/.minikube/profiles/flannel-111909/client.crt: no such file or directory
E0802 18:35:36.578489   12563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5398/.minikube/profiles/flannel-111909/client.crt: no such file or directory
E0802 18:35:36.618759   12563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5398/.minikube/profiles/flannel-111909/client.crt: no such file or directory
E0802 18:35:36.699092   12563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5398/.minikube/profiles/flannel-111909/client.crt: no such file or directory
E0802 18:35:36.859714   12563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5398/.minikube/profiles/flannel-111909/client.crt: no such file or directory
E0802 18:35:37.180467   12563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5398/.minikube/profiles/flannel-111909/client.crt: no such file or directory
E0802 18:35:37.821577   12563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5398/.minikube/profiles/flannel-111909/client.crt: no such file or directory
E0802 18:35:39.102078   12563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5398/.minikube/profiles/flannel-111909/client.crt: no such file or directory
E0802 18:35:41.662443   12563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5398/.minikube/profiles/flannel-111909/client.crt: no such file or directory
E0802 18:35:46.223228   12563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5398/.minikube/profiles/enable-default-cni-111909/client.crt: no such file or directory
E0802 18:35:46.782951   12563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5398/.minikube/profiles/flannel-111909/client.crt: no such file or directory
E0802 18:35:47.842962   12563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5398/.minikube/profiles/false-111909/client.crt: no such file or directory
E0802 18:35:57.023292   12563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5398/.minikube/profiles/flannel-111909/client.crt: no such file or directory
E0802 18:36:00.287208   12563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5398/.minikube/profiles/bridge-111909/client.crt: no such file or directory
E0802 18:36:00.292485   12563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5398/.minikube/profiles/bridge-111909/client.crt: no such file or directory
E0802 18:36:00.302781   12563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5398/.minikube/profiles/bridge-111909/client.crt: no such file or directory
E0802 18:36:00.323091   12563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5398/.minikube/profiles/bridge-111909/client.crt: no such file or directory
E0802 18:36:00.363396   12563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5398/.minikube/profiles/bridge-111909/client.crt: no such file or directory
E0802 18:36:00.443792   12563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5398/.minikube/profiles/bridge-111909/client.crt: no such file or directory
E0802 18:36:00.604197   12563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5398/.minikube/profiles/bridge-111909/client.crt: no such file or directory
E0802 18:36:00.924813   12563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5398/.minikube/profiles/bridge-111909/client.crt: no such file or directory
E0802 18:36:01.565879   12563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5398/.minikube/profiles/bridge-111909/client.crt: no such file or directory
E0802 18:36:02.846028   12563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5398/.minikube/profiles/bridge-111909/client.crt: no such file or directory
E0802 18:36:03.144925   12563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5398/.minikube/profiles/auto-111909/client.crt: no such file or directory
E0802 18:36:05.406370   12563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5398/.minikube/profiles/bridge-111909/client.crt: no such file or directory
E0802 18:36:06.703999   12563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5398/.minikube/profiles/enable-default-cni-111909/client.crt: no such file or directory
E0802 18:36:10.526649   12563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5398/.minikube/profiles/bridge-111909/client.crt: no such file or directory
E0802 18:36:17.503852   12563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5398/.minikube/profiles/flannel-111909/client.crt: no such file or directory
E0802 18:36:18.808593   12563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5398/.minikube/profiles/addons-723198/client.crt: no such file or directory
E0802 18:36:19.339016   12563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5398/.minikube/profiles/calico-111909/client.crt: no such file or directory
E0802 18:36:20.767771   12563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5398/.minikube/profiles/bridge-111909/client.crt: no such file or directory
E0802 18:36:24.002575   12563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5398/.minikube/profiles/kubenet-111909/client.crt: no such file or directory
E0802 18:36:24.007838   12563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5398/.minikube/profiles/kubenet-111909/client.crt: no such file or directory
E0802 18:36:24.018118   12563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5398/.minikube/profiles/kubenet-111909/client.crt: no such file or directory
E0802 18:36:24.038436   12563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5398/.minikube/profiles/kubenet-111909/client.crt: no such file or directory
E0802 18:36:24.078752   12563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5398/.minikube/profiles/kubenet-111909/client.crt: no such file or directory
E0802 18:36:24.159103   12563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5398/.minikube/profiles/kubenet-111909/client.crt: no such file or directory
E0802 18:36:24.319539   12563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5398/.minikube/profiles/kubenet-111909/client.crt: no such file or directory
E0802 18:36:24.640380   12563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5398/.minikube/profiles/kubenet-111909/client.crt: no such file or directory
E0802 18:36:25.280859   12563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5398/.minikube/profiles/kubenet-111909/client.crt: no such file or directory
E0802 18:36:26.561423   12563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5398/.minikube/profiles/kubenet-111909/client.crt: no such file or directory
E0802 18:36:29.121610   12563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5398/.minikube/profiles/kubenet-111909/client.crt: no such file or directory
E0802 18:36:30.827951   12563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5398/.minikube/profiles/auto-111909/client.crt: no such file or directory
E0802 18:36:34.242262   12563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5398/.minikube/profiles/kubenet-111909/client.crt: no such file or directory
E0802 18:36:34.772590   12563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5398/.minikube/profiles/custom-flannel-111909/client.crt: no such file or directory
E0802 18:36:41.248858   12563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5398/.minikube/profiles/bridge-111909/client.crt: no such file or directory
E0802 18:36:44.483258   12563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5398/.minikube/profiles/kubenet-111909/client.crt: no such file or directory
E0802 18:36:47.665163   12563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5398/.minikube/profiles/enable-default-cni-111909/client.crt: no such file or directory
E0802 18:36:58.464048   12563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5398/.minikube/profiles/flannel-111909/client.crt: no such file or directory
E0802 18:37:04.963792   12563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5398/.minikube/profiles/kubenet-111909/client.crt: no such file or directory
E0802 18:37:09.764041   12563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5398/.minikube/profiles/false-111909/client.crt: no such file or directory
E0802 18:37:22.209323   12563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5398/.minikube/profiles/bridge-111909/client.crt: no such file or directory
E0802 18:37:34.424610   12563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5398/.minikube/profiles/kindnet-111909/client.crt: no such file or directory
E0802 18:37:44.460952   12563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5398/.minikube/profiles/gvisor-349582/client.crt: no such file or directory
E0802 18:37:45.924631   12563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5398/.minikube/profiles/kubenet-111909/client.crt: no such file or directory
E0802 18:38:02.110025   12563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5398/.minikube/profiles/kindnet-111909/client.crt: no such file or directory
E0802 18:38:09.585621   12563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5398/.minikube/profiles/enable-default-cni-111909/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-345743 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --kubernetes-version=v1.30.3: (5m5.288376739s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-345743 -n default-k8s-diff-port-345743
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (305.55s)

                                                
                                    
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-pqwhm" [015b7ba9-ba5c-4bba-b1ef-1fdb251a5c85] Running
E0802 18:38:18.606957   12563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5398/.minikube/profiles/skaffold-754678/client.crt: no such file or directory
E0802 18:38:20.384920   12563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5398/.minikube/profiles/flannel-111909/client.crt: no such file or directory
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.00458228s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.08s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-pqwhm" [015b7ba9-ba5c-4bba-b1ef-1fdb251a5c85] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.00428786s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-557189 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.08s)

                                                
                                    
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.67s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-557189 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/gvisor-addon:2
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.67s)

                                                
                                    
TestStartStop/group/no-preload/serial/Pause (2.35s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-557189 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-557189 -n no-preload-557189
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-557189 -n no-preload-557189: exit status 2 (241.27588ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-557189 -n no-preload-557189
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-557189 -n no-preload-557189: exit status 2 (240.140197ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p no-preload-557189 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-557189 -n no-preload-557189
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-557189 -n no-preload-557189
--- PASS: TestStartStop/group/no-preload/serial/Pause (2.35s)

                                                
                                    
TestStartStop/group/newest-cni/serial/FirstStart (58.64s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-279767 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --kubernetes-version=v1.31.0-rc.0
E0802 18:38:35.493598   12563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5398/.minikube/profiles/calico-111909/client.crt: no such file or directory
E0802 18:38:44.129891   12563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5398/.minikube/profiles/bridge-111909/client.crt: no such file or directory
E0802 18:38:50.927599   12563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5398/.minikube/profiles/custom-flannel-111909/client.crt: no such file or directory
E0802 18:39:03.179167   12563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5398/.minikube/profiles/calico-111909/client.crt: no such file or directory
E0802 18:39:07.845147   12563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5398/.minikube/profiles/kubenet-111909/client.crt: no such file or directory
E0802 18:39:18.613609   12563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5398/.minikube/profiles/custom-flannel-111909/client.crt: no such file or directory
E0802 18:39:21.993173   12563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5398/.minikube/profiles/functional-933143/client.crt: no such file or directory
E0802 18:39:25.918838   12563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5398/.minikube/profiles/false-111909/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-279767 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --kubernetes-version=v1.31.0-rc.0: (58.643987634s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (58.64s)

                                                
                                    
TestStartStop/group/newest-cni/serial/DeployApp (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.86s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-279767 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.86s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Stop (13.36s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-279767 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-279767 --alsologtostderr -v=3: (13.354979051s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (13.36s)

                                                
                                    
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-779776cb65-6rm56" [d3f0ea54-e5be-40e4-8fa7-dd199847309e] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004650955s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-779776cb65-s8qtb" [7bd69274-1f3a-4333-a0d0-5ce23b43eb59] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003464941s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.07s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-779776cb65-6rm56" [d3f0ea54-e5be-40e4-8fa7-dd199847309e] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.00382769s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-425999 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.07s)

                                                
                                    
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.2s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-425999 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/gvisor-addon:2
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.20s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.08s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-779776cb65-s8qtb" [7bd69274-1f3a-4333-a0d0-5ce23b43eb59] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004870683s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-345743 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.08s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Pause (2.49s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-425999 --alsologtostderr -v=1
E0802 18:39:41.654406   12563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5398/.minikube/profiles/skaffold-754678/client.crt: no such file or directory
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-425999 -n embed-certs-425999
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-425999 -n embed-certs-425999: exit status 2 (258.475226ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-425999 -n embed-certs-425999
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-425999 -n embed-certs-425999: exit status 2 (240.469773ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p embed-certs-425999 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-425999 -n embed-certs-425999
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-425999 -n embed-certs-425999
--- PASS: TestStartStop/group/embed-certs/serial/Pause (2.49s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.21s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-279767 -n newest-cni-279767
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-279767 -n newest-cni-279767: exit status 7 (68.493535ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-279767 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.21s)

                                                
                                    
TestStartStop/group/newest-cni/serial/SecondStart (38.44s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-279767 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --kubernetes-version=v1.31.0-rc.0
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-279767 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --kubernetes-version=v1.31.0-rc.0: (38.13530252s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-279767 -n newest-cni-279767
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (38.44s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.21s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-345743 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/gvisor-addon:2
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.21s)

TestStartStop/group/default-k8s-diff-port/serial/Pause (2.48s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-345743 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-345743 -n default-k8s-diff-port-345743
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-345743 -n default-k8s-diff-port-345743: exit status 2 (235.01692ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-345743 -n default-k8s-diff-port-345743
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-345743 -n default-k8s-diff-port-345743: exit status 2 (235.070922ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-diff-port-345743 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-345743 -n default-k8s-diff-port-345743
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-345743 -n default-k8s-diff-port-345743
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (2.48s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.77s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-279767 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/gvisor-addon:2
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.77s)

TestStartStop/group/newest-cni/serial/Pause (2.5s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-279767 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-279767 -n newest-cni-279767
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-279767 -n newest-cni-279767: exit status 2 (240.75767ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-279767 -n newest-cni-279767
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-279767 -n newest-cni-279767: exit status 2 (244.367137ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-279767 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-279767 -n newest-cni-279767
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-279767 -n newest-cni-279767
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.50s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-8vnds" [90a7e1b7-aaa7-4aed-a700-0c077b893958] Running
E0802 18:40:36.541733   12563 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5398/.minikube/profiles/flannel-111909/client.crt: no such file or directory
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003675258s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.00s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.07s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-8vnds" [90a7e1b7-aaa7-4aed-a700-0c077b893958] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.00369245s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-681678 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.07s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.2s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-681678 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/gvisor-addon:2
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.20s)

TestStartStop/group/old-k8s-version/serial/Pause (2.22s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-681678 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-681678 -n old-k8s-version-681678
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-681678 -n old-k8s-version-681678: exit status 2 (226.717182ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-681678 -n old-k8s-version-681678
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-681678 -n old-k8s-version-681678: exit status 2 (231.817783ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p old-k8s-version-681678 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-681678 -n old-k8s-version-681678
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-681678 -n old-k8s-version-681678
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (2.22s)


Test skip (34/349)

Order skipped test Duration
5 TestDownloadOnly/v1.20.0/cached-images 0
6 TestDownloadOnly/v1.20.0/binaries 0
7 TestDownloadOnly/v1.20.0/kubectl 0
14 TestDownloadOnly/v1.30.3/cached-images 0
15 TestDownloadOnly/v1.30.3/binaries 0
16 TestDownloadOnly/v1.30.3/kubectl 0
23 TestDownloadOnly/v1.31.0-rc.0/cached-images 0
24 TestDownloadOnly/v1.31.0-rc.0/binaries 0
25 TestDownloadOnly/v1.31.0-rc.0/kubectl 0
29 TestDownloadOnlyKic 0
47 TestAddons/parallel/Olm 0
60 TestDockerEnvContainerd 0
62 TestHyperKitDriverInstallOrUpdate 0
63 TestHyperkitDriverSkipUpgrade 0
115 TestFunctional/parallel/PodmanEnv 0
142 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.01
143 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
144 TestFunctional/parallel/TunnelCmd/serial/WaitService 0.01
145 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
146 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.01
147 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.01
148 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
149 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.01
193 TestImageBuild/serial/validateImageBuildWithBuildEnv 0
220 TestKicCustomNetwork 0
221 TestKicExistingNetwork 0
222 TestKicCustomSubnet 0
223 TestKicStaticIP 0
255 TestChangeNoneUser 0
258 TestScheduledStopWindows 0
262 TestInsufficientStorage 0
266 TestMissingContainerUpgrade 0
277 TestNetworkPlugins/group/cilium 3.57
288 TestStartStop/group/disable-driver-mounts 0.19
TestDownloadOnly/v1.20.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

TestDownloadOnly/v1.20.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

TestDownloadOnly/v1.20.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

TestDownloadOnly/v1.30.3/cached-images (0s)

=== RUN   TestDownloadOnly/v1.30.3/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.30.3/cached-images (0.00s)

TestDownloadOnly/v1.30.3/binaries (0s)

=== RUN   TestDownloadOnly/v1.30.3/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.30.3/binaries (0.00s)

TestDownloadOnly/v1.30.3/kubectl (0s)

=== RUN   TestDownloadOnly/v1.30.3/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.30.3/kubectl (0.00s)

TestDownloadOnly/v1.31.0-rc.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.31.0-rc.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.31.0-rc.0/cached-images (0.00s)

TestDownloadOnly/v1.31.0-rc.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.31.0-rc.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.31.0-rc.0/binaries (0.00s)

TestDownloadOnly/v1.31.0-rc.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.31.0-rc.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.31.0-rc.0/kubectl (0.00s)

TestDownloadOnlyKic (0s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:500: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

TestImageBuild/serial/validateImageBuildWithBuildEnv (0s)

=== RUN   TestImageBuild/serial/validateImageBuildWithBuildEnv
image_test.go:114: skipping due to https://github.com/kubernetes/minikube/issues/12431
--- SKIP: TestImageBuild/serial/validateImageBuildWithBuildEnv (0.00s)

TestKicCustomNetwork (0s)

=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

TestKicExistingNetwork (0s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

TestKicCustomSubnet (0s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

TestKicStaticIP (0s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestInsufficientStorage (0s)

=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

TestMissingContainerUpgrade (0s)

=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

TestNetworkPlugins/group/cilium (3.57s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:626: 
----------------------- debugLogs start: cilium-111909 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-111909

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-111909

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-111909

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-111909

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-111909

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-111909

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-111909

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-111909

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-111909

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-111909

>>> host: /etc/nsswitch.conf:
* Profile "cilium-111909" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-111909"

>>> host: /etc/hosts:
* Profile "cilium-111909" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-111909"

>>> host: /etc/resolv.conf:
* Profile "cilium-111909" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-111909"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-111909

>>> host: crictl pods:
* Profile "cilium-111909" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-111909"

>>> host: crictl containers:
* Profile "cilium-111909" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-111909"

>>> k8s: describe netcat deployment:
error: context "cilium-111909" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-111909" does not exist

>>> k8s: netcat logs:
error: context "cilium-111909" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-111909" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-111909" does not exist

>>> k8s: coredns logs:
error: context "cilium-111909" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-111909" does not exist

>>> k8s: api server logs:
error: context "cilium-111909" does not exist

>>> host: /etc/cni:
* Profile "cilium-111909" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-111909"

>>> host: ip a s:
* Profile "cilium-111909" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-111909"

>>> host: ip r s:
* Profile "cilium-111909" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-111909"

>>> host: iptables-save:
* Profile "cilium-111909" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-111909"

>>> host: iptables table nat:
* Profile "cilium-111909" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-111909"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-111909

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-111909

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-111909" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-111909" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-111909

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-111909

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-111909" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-111909" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-111909" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-111909" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-111909" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-111909" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-111909"

>>> host: kubelet daemon config:
* Profile "cilium-111909" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-111909"

>>> k8s: kubelet logs:
* Profile "cilium-111909" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-111909"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-111909" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-111909"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-111909" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-111909"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-111909

>>> host: docker daemon status:
* Profile "cilium-111909" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-111909"

>>> host: docker daemon config:
* Profile "cilium-111909" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-111909"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-111909" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-111909"

>>> host: docker system info:
* Profile "cilium-111909" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-111909"

>>> host: cri-docker daemon status:
* Profile "cilium-111909" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-111909"

>>> host: cri-docker daemon config:
* Profile "cilium-111909" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-111909"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-111909" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-111909"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-111909" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-111909"

>>> host: cri-dockerd version:
* Profile "cilium-111909" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-111909"

>>> host: containerd daemon status:
* Profile "cilium-111909" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-111909"

>>> host: containerd daemon config:
* Profile "cilium-111909" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-111909"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-111909" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-111909"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-111909" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-111909"

>>> host: containerd config dump:
* Profile "cilium-111909" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-111909"

>>> host: crio daemon status:
* Profile "cilium-111909" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-111909"

>>> host: crio daemon config:
* Profile "cilium-111909" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-111909"

>>> host: /etc/crio:
* Profile "cilium-111909" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-111909"

>>> host: crio config:
* Profile "cilium-111909" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-111909"

----------------------- debugLogs end: cilium-111909 [took: 3.418136566s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-111909" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-111909
--- SKIP: TestNetworkPlugins/group/cilium (3.57s)

TestStartStop/group/disable-driver-mounts (0.19s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-842670" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-842670
--- SKIP: TestStartStop/group/disable-driver-mounts (0.19s)
